Recent runs | View in Spyglass

Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
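
The --ginkgo.focus argument is a regular expression matched against each spec's full description, with \s standing in for the spaces in the spec title. A quick way to sanity-check the pattern is to exercise it in Go against the title it is meant to select (a hypothetical snippet; the title string below is reconstructed from the pattern itself):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied verbatim from the job's --ginkgo.focus argument.
	focus := `capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`
	// Spec title reconstructed from the pattern (assumption: each \s is a single space).
	title := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance " +
		"[Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"
	fmt.Println(regexp.MustCompile(focus).MatchString(title)) // prints: true
}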
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc0015f3fc8>: {
        error: <*errors.withMessage | 0xc0011bcdc0>{
            cause: <*errors.errorString | 0xc0007956b0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a97f78, 0x1adc389, 0x7b9691, 0x7b9085, 0x7b875b, 0x7be4c9, 0x7bdeb2, 0x7def91, 0x7decb6, 0x7de305, 0x7e0745, 0x7ec929, 0x7ec73e, 0x1af7c92, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
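
The root cause is "error container run failed with exit code 137". Exit codes above 128 follow the 128+N convention for death by signal N, so 137 decodes to SIGKILL (9); on a CI node this usually means the conformance container was killed (most often by the kernel OOM killer) rather than failing a test assertion. A minimal sketch of the decoding (a hypothetical helper, not part of the CAPI test framework):

package main

import (
	"fmt"
	"syscall"
)

// describeExitCode decodes a container exit status: codes above 128
// indicate death by signal (code - 128), so 137 means SIGKILL, which
// in CI is most often the kernel OOM killer reclaiming memory.
func describeExitCode(code int) string {
	if code > 128 {
		return fmt.Sprintf("terminated by signal %d (%v)", code-128, syscall.Signal(code-128))
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	fmt.Println(describeExitCode(137)) // terminated by signal 9 (killed)
}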
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-uvsf3n
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-uvsf3n"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-qt17ut" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-qt17ut --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-qt17ut-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-qt17ut-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-uvsf3n/k8s-upgrade-and-conformance-qt17ut-x7fnr to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-uvsf3n/k8s-upgrade-and-conformance-qt17ut-x7fnr to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.16
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-uvsf3n/k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg to be upgraded to v1.23.16
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.16
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-uvsf3n/k8s-upgrade-and-conformance-qt17ut-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-uvsf3n/k8s-upgrade-and-conformance-qt17ut-mp-0 to be upgraded from v1.22.17 to v1.23.16
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.16
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true"]

Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1675350066 - Will randomize all specs
Will run 7052 specs

Running in parallel across 4 nodes

Feb 2 15:01:13.499: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:01:13.502: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 2 15:01:13.522: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 2 15:01:13.583: INFO: The status of Pod coredns-bd6b6df9f-cjq7t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.583: INFO: The status of Pod coredns-bd6b6df9f-mzrpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.583: INFO: The status of Pod kindnet-n7j8k is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.584: INFO: The status of Pod kindnet-vmfkv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.584: INFO: The status of Pod kube-proxy-8j7vf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.584: INFO: The status of Pod kube-proxy-r9grs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:13.584: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 2 15:01:13.584: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Feb 2 15:01:13.584: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 2 15:01:13.584: INFO: coredns-bd6b6df9f-cjq7t k8s-upgrade-and-conformance-qt17ut-worker-vzi0ey Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:45 +0000 UTC }]
Feb 2 15:01:13.584: INFO: coredns-bd6b6df9f-mzrpm k8s-upgrade-and-conformance-qt17ut-worker-4qu639 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:30 +0000 UTC }]
Feb 2 15:01:13.584: INFO: kindnet-n7j8k k8s-upgrade-and-conformance-qt17ut-worker-vzi0ey Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:49 +0000 UTC }]
Feb 2 15:01:13.584: INFO: kindnet-vmfkv k8s-upgrade-and-conformance-qt17ut-worker-4qu639 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:56:40 +0000 UTC }]
Feb 2 15:01:13.584: INFO: kube-proxy-8j7vf k8s-upgrade-and-conformance-qt17ut-worker-vzi0ey Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:58:45 +0000 UTC }]
Feb 2 15:01:13.584: INFO: kube-proxy-r9grs k8s-upgrade-and-conformance-qt17ut-worker-4qu639 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:00:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 14:59:05 +0000 UTC }]
Feb 2 15:01:13.585: INFO:
[... the same six pods were re-polled every 2s with unchanged status and conditions through 15:01:27.628 ("14 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)") ...]
Feb 2 15:01:29.648: INFO: The status of Pod coredns-bd6b6df9f-7rwlc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:29.648: INFO: The status of Pod coredns-bd6b6df9f-87mgb is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:01:29.648: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
Feb 2 15:01:29.648: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Feb 2 15:01:29.648: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 2 15:01:29.648: INFO: coredns-bd6b6df9f-7rwlc k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC }]
Feb 2 15:01:29.648: INFO: coredns-bd6b6df9f-87mgb k8s-upgrade-and-conformance-qt17ut-worker-cnnqas Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:01:28 +0000 UTC }]
Feb 2 15:01:29.648: INFO:
Feb 2 15:01:31.628: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
Feb 2 15:01:31.628: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 2 15:01:31.628: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 2 15:01:31.634: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 2 15:01:31.634: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 2 15:01:31.634: INFO: e2e test version: v1.23.16
Feb 2 15:01:31.636: INFO: kube-apiserver version: v1.23.16
Feb 2 15:01:31.637: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:01:31.645: INFO: Cluster IP family: ipv4
------------------------------
Feb 2 15:01:31.657: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:01:31.686: INFO: Cluster IP family: ipv4
------------------------------
Feb 2 15:01:31.657: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:01:31.694: INFO: Cluster IP family: ipv4
------------------------------
Feb 2 15:01:31.674: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:01:31.702: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:31.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
W0202 15:01:31.868470 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 2 15:01:31.868: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:32.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-269" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:31.821: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
W0202 15:01:31.884832 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 2 15:01:31.884: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Feb 2 15:01:32.676: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-qt17ut-x7fnr-d4zrz is Running (Ready = true)
Feb 2 15:01:32.809: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:32.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6785" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":1,"skipped":32,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:32.863: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39
[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: mirroring a new custom Endpoint
Feb 2 15:01:32.973: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
STEP: mirroring an update to a custom Endpoint
Feb 2 15:01:34.997: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3
STEP: mirroring deletion of a custom Endpoint
Feb 2 15:01:37.016: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:39.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-7171" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":2,"skipped":41,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:39.268: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Starting the proxy
Feb 2 15:01:39.321: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1122 proxy --unix-socket=/tmp/kubectl-proxy-unix3068178617/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:39.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1122" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":3,"skipped":92,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:31.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
W0202 15:01:31.811907 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 2 15:01:31.812: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:48.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5040" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:39.536: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Feb 2 15:01:39.584: INFO: The status of Pod annotationupdate07b5d640-753f-4c04-901f-f2dc8e8af0d4 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:01:41.606: INFO: The status of Pod annotationupdate07b5d640-753f-4c04-901f-f2dc8e8af0d4 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:01:43.593: INFO: The status of Pod annotationupdate07b5d640-753f-4c04-901f-f2dc8e8af0d4 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:01:45.592: INFO: The status of Pod annotationupdate07b5d640-753f-4c04-901f-f2dc8e8af0d4 is Running (Ready = true)
Feb 2 15:01:46.177: INFO: Successfully updated pod "annotationupdate07b5d640-753f-4c04-901f-f2dc8e8af0d4"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:50.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8322" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":117,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:48.175: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:01:48.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384" in namespace "projected-1740" to be "Succeeded or Failed"
Feb 2 15:01:48.330: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 15.614704ms
Feb 2 15:01:50.339: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02405002s
Feb 2 15:01:52.347: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032050279s
Feb 2 15:01:54.352: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037843588s
STEP: Saw pod success
Feb 2 15:01:54.352: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384" satisfied condition "Succeeded or Failed"
Feb 2 15:01:54.358: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-cnnqas pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 container client-container: <nil>
STEP: delete the pod
Feb 2 15:01:54.418: INFO: Waiting for pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 to disappear
Feb 2 15:01:54.440: INFO: Pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:54.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1740" for this suite.
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:48.175: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:01:48.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384" in namespace "projected-1740" to be "Succeeded or Failed"
Feb 2 15:01:48.330: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 15.614704ms
Feb 2 15:01:50.339: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02405002s
Feb 2 15:01:52.347: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032050279s
Feb 2 15:01:54.352: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037843588s
STEP: Saw pod success
Feb 2 15:01:54.352: INFO: Pod "downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384" satisfied condition "Succeeded or Failed"
Feb 2 15:01:54.358: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-cnnqas pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 container client-container: <nil>
STEP: delete the pod
Feb 2 15:01:54.418: INFO: Waiting for pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 to disappear
Feb 2 15:01:54.440: INFO: Pod downwardapi-volume-0d89a2ae-66ef-4df9-8649-dab279731384 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:54.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1740" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":41,"failed":0}
------------------------------
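Here the projected file's content comes from resourceFieldRef rather than fieldRef, so the volume exposes the container's own CPU request. A sketch under illustrative names and values; note that with the default divisor, requests.cpu is written as whole cores, so a 250m request reads back as "1".

    // Sketch: a projected volume exposing the container's CPU request, which
    // is what the spec reads back out of the volume. Illustrative values only.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.36", // illustrative
                    Command: []string{"cat", "/etc/podinfo/cpu_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_request",
                                        // requests.cpu is rounded up to integer cores
                                        // under the default divisor, so this file holds "1".
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "requests.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Println(string(out))
    }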
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:54.607: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:01:54.650: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:55.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8801" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":3,"skipped":84,"failed":0}
------------------------------
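The CRD round-trip above goes through the apiextensions API group rather than core. A minimal sketch of the same create/delete flow; the group, kind and schema are illustrative, since the test's randomly generated CRD names do not appear in the log.

    // Sketch: create and delete a CustomResourceDefinition with the
    // apiextensions clientset. Group, names and schema are illustrative.
    package main

    import (
        "context"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := apiextclient.NewForConfigOrDie(cfg)

        crd := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
                    },
                }},
            },
        }

        ctx := context.TODO()
        if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // Deleting the CRD also removes all instances of the custom resource.
        if err := cs.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }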
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:32.220: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Feb 2 15:01:32.279: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 0 and labels map[test-deployment-static:true] (×2)
Feb 2 15:01:32.293: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 0 and labels map[test-deployment-static:true] (×2)
Feb 2 15:01:32.319: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 0 and labels map[test-deployment-static:true] (×2)
Feb 2 15:01:32.396: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 0 and labels map[test-deployment-static:true] (×2)
Feb 2 15:01:38.473: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment-static:true] (×2)
Feb 2 15:01:38.681: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Feb 2 15:01:38.827: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Feb 2 15:01:38.840: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 0 (×8)
Feb 2 15:01:38.840: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 (×2)
Feb 2 15:01:38.840: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:38.841: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:38.895: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:39.026: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:39.068: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 (×2)
Feb 2 15:01:39.096: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 (×2)
Feb 2 15:01:40.936: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:40.993: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1
STEP: listing Deployments
Feb 2 15:01:41.000: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Feb 2 15:01:41.031: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Feb 2 15:01:41.047: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:41.068: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:41.123: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:41.159: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:41.215: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:42.944: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:49.072: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:49.212: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:49.572: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Feb 2 15:01:58.520: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Feb 2 15:01:58.588: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1
Feb 2 15:01:58.591: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1 (×3)
Feb 2 15:01:58.592: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 1
Feb 2 15:01:58.592: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2
Feb 2 15:01:58.592: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 3
Feb 2 15:01:58.592: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 2 (×2)
Feb 2 15:01:58.592: INFO: observed Deployment test-deployment in namespace deployment-6192 with ReadyReplicas 3
STEP: deleting the Deployment
Feb 2 15:01:58.604: INFO: observed event type MODIFIED
Feb 2 15:01:58.605: INFO: observed event type MODIFIED (×4)
Feb 2 15:01:58.606: INFO: observed event type MODIFIED
Feb 2 15:01:58.607: INFO: observed event type MODIFIED (×5)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Feb 2 15:01:58.625: INFO: Log out all the ReplicaSets if there is no deployment created
Feb 2 15:01:58.640: INFO: ReplicaSet "test-deployment-5ddd8b47d8": &ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-6192 346c6fa6-dc54-45c4-88a0-7d9c6bc9b64f 2361 4 2023-02-02 15:01:38 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2
deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 45c1e4de-cb2d-4377-8c83-53103392fd27 0xc003b093e7 0xc003b093e8}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45c1e4de-cb2d-4377-8c83-53103392fd27\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:01:58 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003b09470 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:01:58.660: INFO: pod: "test-deployment-5ddd8b47d8-sdjqz": &Pod{ObjectMeta:{test-deployment-5ddd8b47d8-sdjqz test-deployment-5ddd8b47d8- deployment-6192 989fbf47-5ebf-4493-af6b-5564f631cea8 2357 0 2023-02-02 15:01:41 +0000 UTC 2023-02-02 15:01:59 +0000 UTC 0xc003886e40 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 346c6fa6-dc54-45c4-88a0-7d9c6bc9b64f 0xc003886e77 0xc003886e78}] [] [{kube-controller-manager Update v1 2023-02-02 15:01:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"346c6fa6-dc54-45c4-88a0-7d9c6bc9b64f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:01:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q6cj9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6cj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-
and-conformance-qt17ut-worker-t1dfk9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.4,StartTime:2023-02-02 15:01:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:01:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.6,ImageID:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,ContainerID:containerd://affcc0e48d95a705355d02848094ad13e555b2c5c69d1532a23dad47364009ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:01:58.660: INFO: ReplicaSet "test-deployment-6d7ffcf7fb": &ReplicaSet{ObjectMeta:{test-deployment-6d7ffcf7fb deployment-6192 2f8627dc-8eb7-4c8e-9507-8f710882691c 2131 3 2023-02-02 15:01:32 +0000 UTC <nil> <nil> map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 45c1e4de-cb2d-4377-8c83-53103392fd27 0xc003b094d7 0xc003b094d8}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:01:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45c1e4de-cb2d-4377-8c83-53103392fd27\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:01:40 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6d7ffcf7fb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003b09560 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:01:58.678: INFO: ReplicaSet "test-deployment-854fdc678": &ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-6192 b0a0f280-d249-4001-8c27-960cdd4a09ee 2353 2 2023-02-02 15:01:41 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 45c1e4de-cb2d-4377-8c83-53103392fd27 0xc003b095c7 0xc003b095c8}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:01:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45c1e4de-cb2d-4377-8c83-53103392fd27\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:01:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003b09650 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:01:58.696: INFO: pod: "test-deployment-854fdc678-b4lbq": &Pod{ObjectMeta:{test-deployment-854fdc678-b4lbq test-deployment-854fdc678- deployment-6192 819294cb-e577-4b95-b82d-25bb549964a2 2220 0 2023-02-02 15:01:41 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 b0a0f280-d249-4001-8c27-960cdd4a09ee 0xc003887c67 0xc003887c68}] [] [{kube-controller-manager Update v1 2023-02-02 15:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0a0f280-d249-4001-8c27-960cdd4a09ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:01:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mbbjx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.4,StartTime:2023-02-02 15:01:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:01:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://08c6c05782d0be01a250997895df5be8b9bb4f35dbb92f948da29c500fff7bf0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:01:58.697: INFO: pod: "test-deployment-854fdc678-q58dj": &Pod{ObjectMeta:{test-deployment-854fdc678-q58dj test-deployment-854fdc678- deployment-6192 c9e280d7-05b7-4f37-8d8d-e3c3650b44d7 2352 0 2023-02-02 15:01:49 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 b0a0f280-d249-4001-8c27-960cdd4a09ee 0xc003887e47 0xc003887e48}] [] [{kube-controller-manager Update v1 2023-02-02 15:01:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0a0f280-d249-4001-8c27-960cdd4a09ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:01:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m6ktm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6ktm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:01:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.4,StartTime:2023-02-02 15:01:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:01:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://ccb55e7f044ec342c1dc6bae731496f1094cd3283f988f21d2177521a33f4d2b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:01:58.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6192" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":2,"skipped":61,"failed":0}
------------------------------
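The lifecycle spec above drives a single Deployment through create, patch, update, status patch and delete; the ReplicaSet and Pod dumps are its [AfterEach] diagnostics, and the pause:3.6 ReplicaSet visible in them is the patched revision. A sketch of the patch step only, with an illustrative namespace; the JSON mirrors the label and image values that appear in the dumps.

    // Sketch: a strategic-merge patch that relabels the Deployment and swaps
    // the container image, one step of the lifecycle exercised above.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Strategic merge matches containers by name, so only the named
        // container's image is changed.
        patch := []byte(`{
          "metadata": {"labels": {"test-deployment": "patched"}},
          "spec": {
            "replicas": 2,
            "template": {"spec": {"containers": [
              {"name": "test-deployment", "image": "k8s.gcr.io/pause:3.6"}
            ]}}
          }
        }`)
        if _, err := cs.AppsV1().Deployments("default").Patch(
            context.TODO(), "test-deployment", types.StrategicMergePatchType,
            patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
    }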
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:55.786: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-53f639a5-301d-4f55-b79c-0243fa7b7738
STEP: Creating a pod to test consume configMaps
Feb 2 15:01:56.057: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7" in namespace "projected-3888" to be "Succeeded or Failed"
Feb 2 15:01:56.063: INFO: Pod "pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.708016ms
Feb 2 15:01:58.070: INFO: Pod "pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012052873s
Feb 2 15:02:00.076: INFO: Pod "pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018535248s
STEP: Saw pod success
Feb 2 15:02:00.076: INFO: Pod "pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7" satisfied condition "Succeeded or Failed"
Feb 2 15:02:00.081: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-cnnqas pod pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:02:00.107: INFO: Waiting for pod pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7 to disappear
Feb 2 15:02:00.112: INFO: Pod pod-projected-configmaps-ddcc6caa-1852-4b82-8868-f463d97fc2e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:00.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3888" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":102,"failed":0}
------------------------------
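The "mappings and Item mode set" wording refers to KeyToPath items: a ConfigMap key is remapped to a different file path with an explicit file mode. A sketch of that volume shape; the ConfigMap name, key, path, image and mode are illustrative stand-ins for the test's generated values.

    // Sketch: a projected ConfigMap volume with one item remapped to a new
    // path and given an explicit mode. Illustrative names and values.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        mode := int32(0400) // owner read-only, the "Item mode set" part
        pod := &corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "agnhost-container",
                    Image:        "busybox:1.36", // illustrative
                    Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected-configmap-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                    // The mapping: key "data-1" lands at "path/to/data-2" with mode 0400.
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Println(string(out))
    }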
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:00.257: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:02:00.316: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:01.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5146" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":5,"skipped":151,"failed":0}
------------------------------
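For the status sub-resource to be exercised at all, the CRD must opt in through subresources.status; with that set, GET/PUT/PATCH against .../status touch only .status, and writes to the main resource ignore .status changes. A sketch of the version stanza involved, with an illustrative permissive schema:

    // Sketch: a CRD version that enables the /status subresource, which is
    // the precondition for the get/update/patch status flow above.
    package main

    import (
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        preserve := true
        v := apiextv1.CustomResourceDefinitionVersion{
            Name: "v1", Served: true, Storage: true,
            // Enabling the status subresource splits spec and status writes
            // onto separate endpoints.
            Subresources: &apiextv1.CustomResourceSubresources{
                Status: &apiextv1.CustomResourceSubresourceStatus{},
            },
            Schema: &apiextv1.CustomResourceValidation{
                OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                    Type:                   "object",
                    XPreserveUnknownFields: &preserve, // illustrative permissive schema
                },
            },
        }
        out, _ := yaml.Marshal(v)
        fmt.Println(string(out))
    }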
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:58.759: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:58.816: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-7045
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:01.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-9033" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:01.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7045" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":3,"skipped":70,"failed":0}
------------------------------
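This spec comes down to two less common client calls: a cluster-wide list (empty namespace) and DeleteCollection with a selector. A sketch with an illustrative namespace and label selector:

    // Sketch: list PodDisruptionBudgets across all namespaces, then delete a
    // labelled collection in one namespace, as the spec above does.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // Empty namespace (metav1.NamespaceAll) lists across every namespace.
        pdbs, err := cs.PolicyV1().PodDisruptionBudgets(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d PodDisruptionBudgets cluster-wide\n", len(pdbs.Items))

        // Delete every PDB in one namespace matching a label selector.
        if err := cs.PolicyV1().PodDisruptionBudgets("disruption-demo").DeleteCollection(
            ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "foo=bar"}); err != nil {
            panic(err)
        }
    }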
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:01.065: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create deployment with httpd image
Feb 2 15:02:01.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3844 create -f -'
Feb 2 15:02:04.466: INFO: stderr: ""
Feb 2 15:02:04.466: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Feb 2 15:02:04.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3844 diff -f -'
Feb 2 15:02:06.864: INFO: rc: 1
Feb 2 15:02:06.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3844 delete -f -'
Feb 2 15:02:07.116: INFO: stderr: ""
Feb 2 15:02:07.116: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:07.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3844" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":6,"skipped":157,"failed":0}
------------------------------
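The "rc: 1" above is the passing outcome: kubectl diff exits 0 when live and declared state match, 1 when a difference is found, and greater than 1 on a real error. A sketch of interpreting those exit codes from Go; the manifest path is illustrative, and the spec itself pipes YAML on stdin instead.

    // Sketch: run kubectl diff and treat exit code 1 as "drift found", the
    // same interpretation the spec above applies to its "rc: 1".
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--kubeconfig=/tmp/kubeconfig", "diff", "-f", "deployment.yaml")
        out, err := cmd.Output()

        var ee *exec.ExitError
        switch {
        case err == nil:
            // Exit 0: live objects already match the declared manifest.
            fmt.Println("no difference between live and declared state")
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            // Exit 1 is kubectl diff's "differences were found" signal.
            fmt.Printf("drift detected:\n%s", out)
        default:
            // Anything else is a genuine kubectl or connection failure.
            panic(err)
        }
    }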
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:01.124: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:02:01.178: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 2 15:02:01.194: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 2 15:02:06.207: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 2 15:02:10.229: INFO: Creating deployment "test-rolling-update-deployment"
Feb 2 15:02:10.236: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 2 15:02:10.263: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 2 15:02:12.282: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 2 15:02:12.295: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Feb 2 15:02:12.326: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9397 96a37aa7-be2f-4a9f-b68c-ae50518f290f 2659 1 2023-02-02 15:02:10 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-02-02 15:02:10 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00424ced8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-02-02 15:02:10 +0000 UTC,LastTransitionTime:2023-02-02 15:02:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8656fc4b57" has successfully progressed.,LastUpdateTime:2023-02-02 15:02:12 +0000 UTC,LastTransitionTime:2023-02-02 15:02:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 2 15:02:12.333: INFO: New ReplicaSet "test-rolling-update-deployment-8656fc4b57" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8656fc4b57 deployment-9397 4aa20d0f-7014-4c2f-9720-8bbb38602e09 2649 1 2023-02-02 15:02:10 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 96a37aa7-be2f-4a9f-b68c-ae50518f290f 0xc00424d387 0xc00424d388}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:02:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a37aa7-be2f-4a9f-b68c-ae50518f290f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:02:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8656fc4b57,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00424d438 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:02:12.334: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 2 15:02:12.334: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9397 b2ad3089-532d-4207-9438-3c5c35a232c8 2658 2 2023-02-02 15:02:01 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 96a37aa7-be2f-4a9f-b68c-ae50518f290f 0xc00424d25f 0xc00424d270}] [] [{e2e.test Update apps/v1 2023-02-02 15:02:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a37aa7-be2f-4a9f-b68c-ae50518f290f\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:02:12 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00424d328 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:02:12.341: INFO: Pod "test-rolling-update-deployment-8656fc4b57-29rgx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8656fc4b57-29rgx test-rolling-update-deployment-8656fc4b57- deployment-9397 4506dae9-bfc6-41a9-ac8b-44cc13a1004f 2648 0 2023-02-02 15:02:10 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8656fc4b57 4aa20d0f-7014-4c2f-9720-8bbb38602e09 0xc00424d877 0xc00424d878}] [] [{kube-controller-manager Update v1 2023-02-02 15:02:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4aa20d0f-7014-4c2f-9720-8bbb38602e09\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:02:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mvr54,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvr54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration
{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:02:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:02:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.7,StartTime:2023-02-02 15:02:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:02:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://387358711e370e3ba930de387ea4ee4172fed56cc4d5b8c9597b65a6dad6eae4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:12.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9397" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":71,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:12.447: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: validating api versions
Feb 2 15:02:12.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8208 api-versions'
Feb 2 15:02:12.703: INFO: stderr: ""
Feb 2 15:02:12.703: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:12.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8208" for this suite.
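The api-versions check above shells out to kubectl, but the same listing comes from the discovery endpoints (/api and /apis). A minimal client-go sketch of the equivalent call, assuming a reachable kubeconfig at the path the suite uses (/tmp/kubeconfig); error handling is reduced to panics for brevity:

```go
// Hypothetical sketch: list served API group/versions the way
// `kubectl api-versions` does, via client-go's discovery client.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups() // queries /api and /apis
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1"; the core group prints as bare "v1"
		}
	}
}
```

The test asserts that the bare "v1" core group appears in this output.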
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":5,"skipped":99,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:07.313: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 2 15:02:07.367: INFO: Waiting up to 5m0s for pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29" in namespace "emptydir-183" to be "Succeeded or Failed"
Feb 2 15:02:07.380: INFO: Pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29": Phase="Pending", Reason="", readiness=false. Elapsed: 13.385329ms
Feb 2 15:02:09.388: INFO: Pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29": Phase="Running", Reason="", readiness=true. Elapsed: 2.021142371s
Feb 2 15:02:11.396: INFO: Pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29": Phase="Running", Reason="", readiness=false. Elapsed: 4.029319766s
Feb 2 15:02:13.403: INFO: Pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036275983s
STEP: Saw pod success
Feb 2 15:02:13.403: INFO: Pod "pod-0876d8d8-e6be-474c-b01b-8982621afc29" satisfied condition "Succeeded or Failed"
Feb 2 15:02:13.409: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-0876d8d8-e6be-474c-b01b-8982621afc29 container test-container: <nil>
STEP: delete the pod
Feb 2 15:02:13.450: INFO: Waiting for pod pod-0876d8d8-e6be-474c-b01b-8982621afc29 to disappear
Feb 2 15:02:13.459: INFO: Pod pod-0876d8d8-e6be-474c-b01b-8982621afc29 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:13.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-183" for this suite.
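The emptyDir run above boils down to a pod that mounts a scratch volume on the default medium, exercises file mode 0666 on it, and exits so its phase becomes Succeeded. A minimal sketch of such a pod object; the image, command, and names are illustrative stand-ins, not the suite's actual fixture:

```go
// Hypothetical sketch of an emptyDir test pod (illustrative names/image).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // exit 0 => phase Succeeded
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Empty EmptyDirVolumeSource = default medium (node disk);
				// Medium: "Memory" would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // print the manifest rather than creating it
}
```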
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":202,"failed":0}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:12.744: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Feb 2 15:02:12.834: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:14.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-802" for this suite.
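The "Updating/Patching PodDisruptionBudget status" steps target the status subresource rather than the main resource. A minimal client-go sketch of the patch variant, assuming a PDB already exists; the PDB name ("my-pdb") and kubeconfig path are illustrative:

```go
// Hypothetical sketch: merge-patch a PodDisruptionBudget's status subresource.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	patch := []byte(`{"status":{"observedGeneration":1}}`)
	// The trailing "status" argument routes the patch to .../status,
	// which is what "Patching PodDisruptionBudget status" refers to.
	_, err = cs.PolicyV1().PodDisruptionBudgets("disruption-802").
		Patch(context.TODO(), "my-pdb", types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
}
```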
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":6,"skipped":106,"failed":0}
------------------------------
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:13.481: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Feb 2 15:02:13.545: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:15.551: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:17.575: INFO: The status of Pod test-pod is Running (Ready = true)
STEP: Creating hostNetwork=true pod
Feb 2 15:02:17.632: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:19.645: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:21.639: INFO: The status of Pod test-host-network-pod is Running (Ready = true)
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 2 15:02:21.647: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:21.647: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:21.648: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:21.648: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:21.882: INFO: Exec stderr: ""
Feb 2 15:02:21.882: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:21.883: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:21.887: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:21.887: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.055: INFO: Exec stderr: ""
Feb 2 15:02:22.055: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.055: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.059: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.059: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.207: INFO: Exec stderr: ""
Feb 2 15:02:22.207: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.208: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.209: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.209: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.347: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 2 15:02:22.348: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.348: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.349: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.350: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.476: INFO: Exec stderr: ""
Feb 2 15:02:22.476: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.476: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.477: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.477: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.638: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 2 15:02:22.639: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.639: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.640: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.640: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.855: INFO: Exec stderr: ""
Feb 2 15:02:22.856: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.856: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.857: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.857: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:22.998: INFO: Exec stderr: ""
Feb 2 15:02:22.998: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:22.998: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:22.999: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:22.999: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:23.161: INFO: Exec stderr: ""
Feb 2 15:02:23.162: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7291 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:02:23.162: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:02:23.163: INFO: ExecWithOptions: Clientset creation
Feb 2 15:02:23.163: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7291/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:02:23.294: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:23.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7291" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":204,"failed":0}
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:23.354: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:23.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-6465" for this suite.
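The PodTemplate lifecycle above is plain CRUD against core/v1. A minimal client-go sketch, assuming a reachable cluster; the template name, namespace, image, and kubeconfig path are illustrative:

```go
// Hypothetical sketch: create, patch, and delete a PodTemplate.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	tpl := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-pod-template"},
		Template: corev1.PodTemplateSpec{
			// A PodTemplate must carry a valid pod spec with at least one container.
			Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}}},
		},
	}
	pt := cs.CoreV1().PodTemplates("default")
	if _, err := pt.Create(ctx, tpl, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Label it with a merge patch, then clean up.
	patch := []byte(`{"metadata":{"labels":{"podtemplate":"patched"}}}`)
	if _, err := pt.Patch(ctx, tpl.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	if err := pt.Delete(ctx, tpl.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```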
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":9,"skipped":214,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:23.656: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:02:23.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e" in namespace "projected-1265" to be "Succeeded or Failed"
Feb 2 15:02:23.760: INFO: Pod "downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725299ms
Feb 2 15:02:25.767: INFO: Pod "downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015869051s
Feb 2 15:02:27.775: INFO: Pod "downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02400951s
STEP: Saw pod success
Feb 2 15:02:27.775: INFO: Pod "downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e" satisfied condition "Succeeded or Failed"
Feb 2 15:02:27.782: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e container client-container: <nil>
STEP: delete the pod
Feb 2 15:02:27.811: INFO: Waiting for pod downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e to disappear
Feb 2 15:02:27.815: INFO: Pod downwardapi-volume-c13bd5a0-1952-4b00-b341-7b5668a7fa6e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:27.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1265" for this suite.
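The downward API test works by projecting the container's own memory request into a file that the container then reads back. A minimal sketch of the pod shape, assuming illustrative names, image, and request size; requests.memory is the field the test verifies:

```go
// Hypothetical sketch: expose a container's memory request through a
// projected downward API volume (illustrative names/image).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory", // the value written to the file
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```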
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":266,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:28.012: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:02:28.050: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Feb 2 15:02:28.073: INFO: The status of Pod pod-logs-websocket-896b30ac-d95c-4f1d-9d1b-4b24cd6b7683 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:30.087: INFO: The status of Pod pod-logs-websocket-896b30ac-d95c-4f1d-9d1b-4b24cd6b7683 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:02:32.079: INFO: The status of Pod pod-logs-websocket-896b30ac-d95c-4f1d-9d1b-4b24cd6b7683 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3424" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":334,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:14.944: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Feb 2 15:02:14.998: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:37.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6098" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":7,"skipped":118,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:37.658: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:02:38.264: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:02:41.285: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:51.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7281" for this suite.
STEP: Destroying namespace "webhook-7281-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":8,"skipped":145,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:51.551: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:02:52.184: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:02:55.205: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:02:55.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5668" for this suite.
STEP: Destroying namespace "webhook-5668-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":9,"skipped":187,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:55.320: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:03:06.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1506" for this suite.
•
------------------------------
[Conformance]","total":-1,"completed":10,"skipped":188,"failed":0} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:01:50.406: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name s-test-opt-del-de2e26a1-a8b0-420f-ae3b-e22903a62dcc �[1mSTEP�[0m: Creating secret with name s-test-opt-upd-8d42736b-d3c2-4810-940a-820a4e12cb11 �[1mSTEP�[0m: Creating the pod Feb 2 15:01:50.542: INFO: The status of Pod pod-projected-secrets-cd6a48bd-5e7a-4001-bf4f-7cbcdaefe3be is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:01:52.549: INFO: The status of Pod pod-projected-secrets-cd6a48bd-5e7a-4001-bf4f-7cbcdaefe3be is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:01:54.551: INFO: The status of Pod pod-projected-secrets-cd6a48bd-5e7a-4001-bf4f-7cbcdaefe3be is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:01:56.557: INFO: The status of Pod pod-projected-secrets-cd6a48bd-5e7a-4001-bf4f-7cbcdaefe3be is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:01:58.547: INFO: The status of Pod pod-projected-secrets-cd6a48bd-5e7a-4001-bf4f-7cbcdaefe3be is Running (Ready = true) �[1mSTEP�[0m: Deleting secret s-test-opt-del-de2e26a1-a8b0-420f-ae3b-e22903a62dcc �[1mSTEP�[0m: Updating secret s-test-opt-upd-8d42736b-d3c2-4810-940a-820a4e12cb11 �[1mSTEP�[0m: Creating secret with name s-test-opt-create-234c82fb-7477-4435-9e3d-1114422cee45 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:03:21.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-722" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":152,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:03:21.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-253f9798-8629-4b74-9d32-b042e7dd3198
STEP: Creating a pod to test consume configMaps
Feb 2 15:03:21.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421" in namespace "configmap-6852" to be "Succeeded or Failed"
Feb 2 15:03:21.336: INFO: Pod "pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158701ms
Feb 2 15:03:23.340: INFO: Pod "pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421": Phase="Running", Reason="", readiness=false. Elapsed: 2.007301282s
Feb 2 15:03:25.345: INFO: Pod "pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011886827s
STEP: Saw pod success
Feb 2 15:03:25.345: INFO: Pod "pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421" satisfied condition "Succeeded or Failed"
Feb 2 15:03:25.348: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:03:25.363: INFO: Waiting for pod pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421 to disappear
Feb 2 15:03:25.365: INFO: Pod pod-configmaps-26713a35-bbe2-4d0c-a338-aa7a5eea1421 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:03:25.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6852" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":222,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:02:32.134: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod test-webserver-37dba43d-13f6-42dc-816d-a79bbfe72ded in namespace container-probe-9755
Feb 2 15:02:34.209: INFO: Started pod test-webserver-37dba43d-13f6-42dc-816d-a79bbfe72ded in namespace container-probe-9755
STEP: checking the pod's current state and verifying that restartCount is present
Feb 2 15:02:34.217: INFO: Initial restart count of pod test-webserver-37dba43d-13f6-42dc-816d-a79bbfe72ded is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:06:34.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9755" for this suite.
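The probe under test is an HTTP GET against /healthz that keeps succeeding, so the pass criterion is simply that restartCount stays 0 for the observation window (hence the ~4-minute runtime). A minimal sketch of such a container; the image and port are illustrative, and the ProbeHandler embedding assumes k8s.io/api v1.23 or newer:

```go
// Hypothetical sketch: a container whose HTTP liveness probe should never
// trip, so the kubelet never restarts it (illustrative image/port).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39", // illustrative
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       10,
			FailureThreshold:    3, // a restart would need 3 consecutive failures
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```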
• [SLOW TEST:242.748 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":337,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:06:34.899: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 2 15:06:38.951: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:06:38.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1437" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":348,"failed":0}
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:01:31.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
W0202 15:01:31.842608 15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 2 15:01:31.842: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:07:01.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6155" for this suite.
• [SLOW TEST:330.179 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:07:01.981: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should complete a service status lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Service
STEP: watching for the Service to be added
Feb 2 15:07:02.041: INFO: Found Service test-service-zzcgt in namespace services-3872 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}]
Feb 2 15:07:02.041: INFO: Service test-service-zzcgt created
STEP: Getting /status
Feb 2 15:07:02.057: INFO: Service test-service-zzcgt has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Feb 2 15:07:02.072: INFO: observed Service test-service-zzcgt in namespace services-3872 with annotations: map[] & LoadBalancer: {[]}
Feb 2 15:07:02.072: INFO: Found Service test-service-zzcgt in namespace services-3872 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Feb 2 15:07:02.072: INFO: Service test-service-zzcgt has service status patched
STEP: updating the ServiceStatus
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:01.981: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Service STEP: watching for the Service to be added Feb 2 15:07:02.041: INFO: Found Service test-service-zzcgt in namespace services-3872 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}] Feb 2 15:07:02.041: INFO: Service test-service-zzcgt created STEP: Getting /status Feb 2 15:07:02.057: INFO: Service test-service-zzcgt has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Feb 2 15:07:02.072: INFO: observed Service test-service-zzcgt in namespace services-3872 with annotations: map[] & LoadBalancer: {[]} Feb 2 15:07:02.072: INFO: Found Service test-service-zzcgt in namespace services-3872 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Feb 2 15:07:02.072: INFO: Service test-service-zzcgt has service status patched STEP: updating the ServiceStatus Feb 2 15:07:02.085: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Feb 2 15:07:02.093: INFO: Observed Service test-service-zzcgt in namespace services-3872 with annotations: map[] & Conditions: {[]} Feb 2 15:07:02.093: INFO: Observed event: &Service{ObjectMeta:{test-service-zzcgt services-3872 56a192b4-6928-432c-892f-f1c15d0f881d 3778 0 2023-02-02 15:07:02 +0000 UTC <nil> <nil> map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-02-02 15:07:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-02-02 15:07:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.130.117.254,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.130.117.254],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Feb 2 15:07:02.093: INFO: Found Service test-service-zzcgt in namespace services-3872 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Feb 2 15:07:02.093: INFO: Service test-service-zzcgt has service status updated STEP: patching the service STEP: watching for the Service to be patched Feb 2 15:07:02.122: INFO: observed Service test-service-zzcgt in namespace services-3872 with labels: map[test-service-static:true] Feb 2 15:07:02.122: INFO: observed Service test-service-zzcgt in namespace services-3872 with labels: map[test-service-static:true] Feb 2 15:07:02.122: INFO: observed Service test-service-zzcgt in namespace services-3872 with labels: map[test-service-static:true] Feb 2 15:07:02.122: INFO: Found Service test-service-zzcgt in namespace services-3872 with labels: map[test-service:patched test-service-static:true] Feb 2 15:07:02.122: INFO: Service test-service-zzcgt patched STEP: deleting the service STEP: watching for the Service to be deleted Feb 2 15:07:02.143: INFO: Observed event: ADDED Feb 2 15:07:02.143: INFO: Observed event: MODIFIED Feb 2 15:07:02.143: INFO: Observed event: MODIFIED Feb 2 15:07:02.143: INFO: Observed event: MODIFIED Feb 2 15:07:02.144: INFO: Found Service test-service-zzcgt in namespace services-3872 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Feb 2 15:07:02.144: INFO: Service test-service-zzcgt deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:02.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3872" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 •
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
------------------------------
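The Service the test creates can be read off the log above (label test-service-static:true, one TCP port named http on 80, type ClusterIP). A sketch of an equivalent manifest, with a fixed name in place of the generated test-service-zzcgt:

apiVersion: v1
kind: Service
metadata:
  name: test-service-demo          # the test generates names like test-service-zzcgt
  labels:
    test-service-static: "true"
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

The lifecycle steps then write to the object's status subresource (the patched loadBalancer ingress 203.0.113.1 and the StatusUpdate condition visible in the dump), which ordinary spec updates do not touch.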
[BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:06:38.983: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-configmap-vm57 STEP: Creating a pod to test atomic-volume-subpath Feb 2 15:06:39.017: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vm57" in namespace "subpath-4169" to be "Succeeded or Failed" Feb 2 15:06:39.021: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888772ms Feb 2 15:06:41.024: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 2.006593154s Feb 2 15:06:43.029: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 4.01127552s Feb 2 15:06:45.034: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 6.015981154s Feb 2 15:06:47.038: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 8.020767778s Feb 2 15:06:49.043: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 10.02563774s Feb 2 15:06:51.047: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 12.029728862s Feb 2 15:06:53.052: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 14.034490635s Feb 2 15:06:55.056: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 16.038712751s Feb 2 15:06:57.061: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 18.043282148s Feb 2 15:06:59.065: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=true. Elapsed: 20.046880919s Feb 2 15:07:01.069: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Running", Reason="", readiness=false. Elapsed: 22.051152939s Feb 2 15:07:03.074: INFO: Pod "pod-subpath-test-configmap-vm57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056680251s STEP: Saw pod success Feb 2 15:07:03.074: INFO: Pod "pod-subpath-test-configmap-vm57" satisfied condition "Succeeded or Failed" Feb 2 15:07:03.077: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-subpath-test-configmap-vm57 container test-container-subpath-configmap-vm57: <nil> STEP: delete the pod Feb 2 15:07:03.104: INFO: Waiting for pod pod-subpath-test-configmap-vm57 to disappear Feb 2 15:07:03.107: INFO: Pod pod-subpath-test-configmap-vm57 no longer exists STEP: Deleting pod pod-subpath-test-configmap-vm57 Feb 2 15:07:03.107: INFO: Deleting pod "pod-subpath-test-configmap-vm57" in namespace "subpath-4169" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:03.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4169" for this suite. •
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":14,"skipped":356,"failed":0}
------------------------------
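The atomic-writer subpath test mounts a single ConfigMap key through volumeMounts[].subPath. A sketch of the shape of pod-subpath-test-configmap-vm57 (ConfigMap name, key, image, and paths are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-demo
spec:
  restartPolicy: Never
  volumes:
  - name: config-volume
    configMap:
      name: demo-configmap         # hypothetical
  containers:
  - name: test-container-subpath
    image: busybox                 # hypothetical
    command: ["cat", "/test-volume/data"]
    volumeMounts:
    - name: config-volume
      mountPath: /test-volume/data
      subPath: data                # mounts only this key, not the whole volume

The repeated Phase="Running" polls above are the framework waiting out the pod's scripted lifetime before it checks for "Succeeded or Failed".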
[BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:02.220: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 2 15:07:06.281: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:06.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-939" for this suite. •
------------------------------
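This variant checks a custom terminationMessagePath written by a non-root user; the log shows the expected message is DONE. A hedged sketch (path, UID, and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: termination-path-demo      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # hypothetical
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
    securityContext:
      runAsUser: 1000              # non-root, per the test name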
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:03.194: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-baeeada5-8cff-4184-b072-664648b66312 STEP: Creating a pod to test consume secrets Feb 2 15:07:03.233: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8" in namespace "projected-1546" to be "Succeeded or Failed" Feb 2 15:07:03.238: INFO: Pod "pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.940742ms Feb 2 15:07:05.243: INFO: Pod "pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009886165s Feb 2 15:07:07.246: INFO: Pod "pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013348285s STEP: Saw pod success Feb 2 15:07:07.246: INFO: Pod "pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8" satisfied condition "Succeeded or Failed" Feb 2 15:07:07.249: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8 container projected-secret-volume-test: <nil> STEP: delete the pod Feb 2 15:07:07.264: INFO: Waiting for pod pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8 to disappear Feb 2 15:07:07.268: INFO: Pod pod-projected-secrets-9f603973-f6a4-4e5b-8ad3-7a1af511a9e8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:07.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1546" for this suite. •
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":410,"failed":0}
------------------------------
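Projected volumes let several sources (secrets, configMaps, downward API) share one mount. A sketch matching this spec's shape; the container name projected-secret-volume-test is taken from the log, while the secret name, mount path, key, and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-demo   # the test generates a suffixed name
  containers:
  - name: projected-secret-volume-test
    image: busybox                 # hypothetical
    command: ["cat", "/projected-volume/data-1"]   # hypothetical key
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /projected-volume
      readOnly: true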
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:07.309: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-d9cd5eef-8251-4436-95b8-f93cfa4d7139 STEP: Creating a pod to test consume configMaps Feb 2 15:07:07.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3" in namespace "configmap-6735" to be "Succeeded or Failed" Feb 2 15:07:07.345: INFO: Pod "pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190604ms Feb 2 15:07:09.350: INFO: Pod "pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007472802s Feb 2 15:07:11.353: INFO: Pod "pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010500437s STEP: Saw pod success Feb 2 15:07:11.353: INFO: Pod "pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3" satisfied condition "Succeeded or Failed" Feb 2 15:07:11.355: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3 container agnhost-container: <nil> STEP: delete the pod Feb 2 15:07:11.374: INFO: Waiting for pod pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3 to disappear Feb 2 15:07:11.377: INFO: Pod pod-configmaps-233a5052-c785-4147-bca2-376c412bf6b3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:11.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6735" for this suite. •
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":427,"failed":0}
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}
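The plain ConfigMap-volume variant differs from the projected case only in the volume source. A minimal sketch (ConfigMap name, key, and image are hypothetical; agnhost-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo   # the test generates a suffixed name
  containers:
  - name: agnhost-container
    image: busybox                 # hypothetical; the suite uses its agnhost image
    command: ["cat", "/etc/configmap-volume/data-1"]   # hypothetical key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume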
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:06.308: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 15:07:06.824: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 15:07:09.845: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 2 15:07:11.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-5417 attach --namespace=webhook-5417 to-be-attached-pod -i -c=container1' Feb 2 15:07:11.969: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:11.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5417" for this suite. STEP: Destroying namespace "webhook-5417-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}
------------------------------
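The deny-attach behaviour comes from a validating webhook registered against the pods/attach subresource (kubectl attach is a CONNECT call, hence the rc: 1 above). A sketch of the kind of registration involved; the webhook name, path, and caBundle handling are hypothetical, while e2e-test-webhook and the webhook-5417 namespace come from the log:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod-demo    # hypothetical
webhooks:
- name: deny-attaching-pod.example.com   # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-5417      # the test's namespace, per the log
      name: e2e-test-webhook
      path: /pods/attach           # hypothetical
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail

The later "should mutate pod and apply defaults after mutation" spec in this log reuses the same sample-webhook-deployment and service with a MutatingWebhookConfiguration instead.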
[BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:12.060: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6531, will wait for the garbage collector to delete the pods Feb 2 15:07:14.187: INFO: Deleting Job.batch foo took: 5.187126ms Feb 2 15:07:14.287: INFO: Terminating Job.batch foo pods took: 100.583675ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:46.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6531" for this suite. •
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0}
------------------------------
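The Job only needs enough parallel, long-running pods for the "active pods == parallelism" check; deletion then relies on the garbage collector removing the dependent pods, which accounts for the roughly 33-second gap between "Terminating Job.batch foo pods" and "Ensuring job was deleted". A hypothetical equivalent:

apiVersion: batch/v1
kind: Job
metadata:
  name: foo                        # the name used by the test
spec:
  parallelism: 2                   # hypothetical count
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox             # hypothetical
        command: ["sleep", "3600"] # keeps pods active until the Job is deleted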
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:47.007: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 15:07:47.662: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 15:07:50.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:07:50.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2708" for this suite. STEP: Destroying namespace "webhook-2708-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}
------------------------------
[BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 2 15:07:50.899: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-b6rjb in namespace proxy-8528 I0202 15:07:50.945874 15 runners.go:193] Created replication controller with name: proxy-service-b6rjb, namespace: proxy-8528, replica count: 1 I0202 15:07:51.997210 15 runners.go:193] proxy-service-b6rjb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 15:07:52.997455 15 runners.go:193] proxy-service-b6rjb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 15:07:53.000: INFO: setup took 2.076901076s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 2 15:07:53.007: INFO: (0) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 6.078948ms) Feb 2 15:07:53.008: INFO: (0) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 7.543778ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 11.107297ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 11.186233ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<...
(200; 11.229221ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 11.174847ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 11.164258ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 11.252166ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 11.306292ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 11.370895ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 11.490244ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 11.508795ms) Feb 2 15:07:53.012: INFO: (0) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 11.364175ms) Feb 2 15:07:53.014: INFO: (0) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 13.637743ms) Feb 2 15:07:53.014: INFO: (0) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 13.727193ms) Feb 2 15:07:53.014: INFO: (0) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 14.033382ms) Feb 2 15:07:53.022: INFO: (1) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.284172ms) Feb 2 15:07:53.022: INFO: (1) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 7.310735ms) Feb 2 15:07:53.022: INFO: (1) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 7.230655ms) Feb 2 15:07:53.022: INFO: (1) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 7.511949ms) Feb 2 15:07:53.022: INFO: (1) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 7.858273ms) Feb 2 15:07:53.023: INFO: (1) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 8.543482ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.49403ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 8.529659ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.795126ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.737769ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.613931ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 8.97922ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 9.096003ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 8.658873ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.897819ms) Feb 2 15:07:53.024: INFO: (1) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 9.015435ms) Feb 2 15:07:53.027: INFO: (2) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 3.247017ms) Feb 2 15:07:53.029: INFO: (2) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 4.981313ms) Feb 2 15:07:53.029: INFO: (2) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.252693ms) Feb 2 15:07:53.029: INFO: (2) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 5.329451ms) Feb 2 15:07:53.029: INFO: (2) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 5.61321ms) Feb 2 15:07:53.031: INFO: (2) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.130775ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.510121ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.40405ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 8.430692ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.606778ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... 
(200; 8.418376ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.557583ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.457337ms) Feb 2 15:07:53.032: INFO: (2) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.51459ms) Feb 2 15:07:53.033: INFO: (2) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.99825ms) Feb 2 15:07:53.033: INFO: (2) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 9.444198ms) Feb 2 15:07:53.039: INFO: (3) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.768111ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 6.842193ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 6.755013ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 7.096132ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 6.954949ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.03409ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.061135ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.354859ms) Feb 2 15:07:53.041: INFO: (3) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 7.964856ms) Feb 2 15:07:53.042: INFO: (3) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 8.408727ms) Feb 2 15:07:53.042: INFO: (3) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.436332ms) Feb 2 15:07:53.042: INFO: (3) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.29788ms) Feb 2 15:07:53.043: INFO: (3) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 9.049291ms) Feb 2 15:07:53.043: INFO: (3) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.967173ms) Feb 2 15:07:53.043: INFO: (3) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.935411ms) Feb 2 15:07:53.044: INFO: (3) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 11.027089ms) Feb 2 15:07:53.051: INFO: (4) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.761612ms) Feb 2 15:07:53.051: INFO: (4) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... 
(200; 6.333189ms) Feb 2 15:07:53.051: INFO: (4) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 6.19398ms) Feb 2 15:07:53.051: INFO: (4) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.029685ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.079928ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 7.444475ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 6.772383ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 6.869703ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.954968ms) Feb 2 15:07:53.052: INFO: (4) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.138619ms) Feb 2 15:07:53.053: INFO: (4) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.750067ms) Feb 2 15:07:53.054: INFO: (4) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.507777ms) Feb 2 15:07:53.054: INFO: (4) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.678462ms) Feb 2 15:07:53.054: INFO: (4) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.403391ms) Feb 2 15:07:53.054: INFO: (4) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.702725ms) Feb 2 15:07:53.054: INFO: (4) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.927445ms) Feb 2 15:07:53.059: INFO: (5) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 5.497347ms) Feb 2 15:07:53.059: INFO: (5) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 4.888444ms) Feb 2 15:07:53.060: INFO: (5) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.727263ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 6.185733ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 6.700324ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.180555ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.684915ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.018571ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 6.577118ms) Feb 2 15:07:53.061: INFO: (5) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 6.810686ms) Feb 2 15:07:53.062: INFO: (5) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.257033ms) Feb 2 15:07:53.063: INFO: (5) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.183125ms) Feb 2 15:07:53.063: INFO: (5) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.572809ms) Feb 2 15:07:53.063: INFO: (5) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.32081ms) Feb 2 15:07:53.063: INFO: (5) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.924092ms) Feb 2 15:07:53.063: INFO: (5) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.395596ms) Feb 2 15:07:53.071: INFO: (6) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.132021ms) Feb 2 15:07:53.071: INFO: (6) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 8.347211ms) Feb 2 15:07:53.072: INFO: (6) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 9.03911ms) Feb 2 15:07:53.072: INFO: (6) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 9.132566ms) Feb 2 15:07:53.072: INFO: (6) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 9.211888ms) Feb 2 15:07:53.072: INFO: (6) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 9.366929ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 10.430408ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 10.289999ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 10.362065ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 10.539155ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 10.56381ms) Feb 2 15:07:53.073: INFO: (6) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 10.43591ms) Feb 2 15:07:53.074: INFO: (6) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 10.520592ms) Feb 2 15:07:53.074: INFO: (6) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 10.59528ms) Feb 2 15:07:53.074: INFO: (6) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 10.641273ms) Feb 2 15:07:53.074: INFO: (6) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 10.714773ms) Feb 2 15:07:53.081: INFO: (7) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.059021ms) Feb 2 15:07:53.081: INFO: (7) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.301316ms) Feb 2 15:07:53.081: INFO: (7) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.078445ms) Feb 2 15:07:53.081: INFO: (7) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.156755ms) Feb 2 15:07:53.082: INFO: (7) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 7.31252ms) Feb 2 15:07:53.082: INFO: (7) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.915591ms) Feb 2 15:07:53.083: INFO: (7) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 9.001685ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 9.331199ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 9.625152ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 9.505585ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 9.407367ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 9.578893ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 9.41084ms) Feb 2 15:07:53.084: INFO: (7) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 9.787699ms) Feb 2 15:07:53.085: INFO: (7) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 10.505397ms) Feb 2 15:07:53.085: INFO: (7) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 10.451156ms) Feb 2 15:07:53.090: INFO: (8) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 4.4986ms) Feb 2 15:07:53.090: INFO: (8) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... 
(200; 4.559605ms) Feb 2 15:07:53.090: INFO: (8) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.209896ms) Feb 2 15:07:53.091: INFO: (8) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 5.434072ms) Feb 2 15:07:53.091: INFO: (8) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.469711ms) Feb 2 15:07:53.091: INFO: (8) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 6.071742ms) Feb 2 15:07:53.092: INFO: (8) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 6.81452ms) Feb 2 15:07:53.092: INFO: (8) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 6.978989ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 7.635872ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 7.577561ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 7.548809ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 7.998953ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.142243ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.290332ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.188531ms) Feb 2 15:07:53.093: INFO: (8) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.420865ms) Feb 2 15:07:53.098: INFO: (9) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 4.337954ms) Feb 2 15:07:53.098: INFO: (9) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 4.095482ms) Feb 2 15:07:53.099: INFO: (9) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 4.885707ms) Feb 2 15:07:53.099: INFO: (9) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 4.711451ms) Feb 2 15:07:53.099: INFO: (9) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 5.666308ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 5.095011ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... 
(200; 6.012085ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 5.840428ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.382544ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 6.566934ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.002727ms) Feb 2 15:07:53.100: INFO: (9) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 6.420272ms) Feb 2 15:07:53.101: INFO: (9) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.191331ms) Feb 2 15:07:53.101: INFO: (9) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 6.688228ms) Feb 2 15:07:53.101: INFO: (9) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 7.034822ms) Feb 2 15:07:53.102: INFO: (9) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 6.952772ms) Feb 2 15:07:53.111: INFO: (10) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 9.684907ms) Feb 2 15:07:53.111: INFO: (10) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 9.541077ms) Feb 2 15:07:53.112: INFO: (10) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 10.464257ms) Feb 2 15:07:53.112: INFO: (10) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 10.6155ms) Feb 2 15:07:53.112: INFO: (10) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 10.697523ms) Feb 2 15:07:53.114: INFO: (10) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 11.817921ms) Feb 2 15:07:53.114: INFO: (10) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 12.073281ms) Feb 2 15:07:53.114: INFO: (10) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 11.944325ms) Feb 2 15:07:53.114: INFO: (10) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 11.96587ms) Feb 2 15:07:53.114: INFO: (10) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 12.023656ms) Feb 2 15:07:53.118: INFO: (10) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 15.908419ms) Feb 2 15:07:53.118: INFO: (10) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 16.432084ms) Feb 2 15:07:53.118: INFO: (10) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 16.749165ms) Feb 2 15:07:53.119: INFO: (10) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 17.268106ms) Feb 2 15:07:53.119: INFO: (10) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 17.537761ms) Feb 2 15:07:53.120: INFO: (10) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 17.936513ms) Feb 2 15:07:53.124: INFO: (11) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 3.931049ms) Feb 2 15:07:53.124: INFO: (11) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 3.861653ms) Feb 2 15:07:53.124: INFO: (11) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 3.958802ms) Feb 2 15:07:53.127: INFO: (11) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.143505ms) Feb 2 15:07:53.127: INFO: (11) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.532107ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.107902ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.934954ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.407898ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.472409ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 8.582336ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 8.529878ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.663774ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 8.588156ms) Feb 2 15:07:53.128: INFO: (11) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.548828ms) Feb 2 15:07:53.129: INFO: (11) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.78272ms) Feb 2 15:07:53.129: INFO: (11) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.825458ms) Feb 2 15:07:53.133: INFO: (12) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 4.074157ms) Feb 2 15:07:53.135: INFO: (12) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 5.581099ms) Feb 2 15:07:53.136: INFO: (12) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.110795ms) Feb 2 15:07:53.136: INFO: (12) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 6.911883ms) Feb 2 15:07:53.136: INFO: (12) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.692273ms) Feb 2 15:07:53.136: INFO: (12) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 7.046755ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.288458ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 7.852401ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 7.812458ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 7.360622ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.021319ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 7.61214ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 7.68617ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.790103ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.866771ms) Feb 2 15:07:53.137: INFO: (12) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 7.939351ms) Feb 2 15:07:53.140: INFO: (13) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 3.001727ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.64776ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.775949ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 5.822296ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 5.679015ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 6.389762ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 6.076555ms) Feb 2 15:07:53.144: INFO: (13) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... 
(200; 6.38403ms) Feb 2 15:07:53.145: INFO: (13) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.260801ms) Feb 2 15:07:53.146: INFO: (13) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 7.686801ms) Feb 2 15:07:53.147: INFO: (13) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 9.411486ms) Feb 2 15:07:53.147: INFO: (13) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 9.4748ms) Feb 2 15:07:53.147: INFO: (13) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 9.4191ms) Feb 2 15:07:53.148: INFO: (13) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 9.686506ms) Feb 2 15:07:53.148: INFO: (13) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 10.16688ms) Feb 2 15:07:53.148: INFO: (13) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 9.809766ms) Feb 2 15:07:53.150: INFO: (14) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 2.418931ms) Feb 2 15:07:53.154: INFO: (14) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 6.335972ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 6.862559ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 6.785319ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.844503ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 6.896749ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 7.072276ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 7.212221ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.265461ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 7.344575ms) Feb 2 15:07:53.155: INFO: (14) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 7.215733ms) Feb 2 15:07:53.156: INFO: (14) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 7.718595ms) Feb 2 15:07:53.157: INFO: (14) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.214541ms) Feb 2 15:07:53.157: INFO: (14) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.260805ms) Feb 2 15:07:53.157: INFO: (14) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.631274ms) Feb 2 15:07:53.157: INFO: (14) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.418207ms) Feb 2 15:07:53.160: INFO: (15) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 3.221381ms) Feb 2 15:07:53.161: INFO: (15) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 4.371656ms) Feb 2 15:07:53.162: INFO: (15) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 5.278205ms) Feb 2 15:07:53.162: INFO: (15) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 5.375368ms) Feb 2 15:07:53.163: INFO: (15) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... 
(200; 5.691736ms) Feb 2 15:07:53.163: INFO: (15) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.881483ms) Feb 2 15:07:53.163: INFO: (15) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 5.837066ms) Feb 2 15:07:53.163: INFO: (15) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 6.106298ms) Feb 2 15:07:53.163: INFO: (15) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 6.29612ms) Feb 2 15:07:53.164: INFO: (15) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 6.535035ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 8.082352ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 8.254638ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.03757ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 8.379139ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 8.44485ms) Feb 2 15:07:53.165: INFO: (15) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 8.395264ms) Feb 2 15:07:53.173: INFO: (16) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.714684ms) Feb 2 15:07:53.173: INFO: (16) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 7.717467ms) Feb 2 15:07:53.173: INFO: (16) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.715108ms) Feb 2 15:07:53.173: INFO: (16) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 7.800125ms) Feb 2 15:07:53.173: INFO: (16) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 7.814029ms) Feb 2 15:07:53.174: INFO: (16) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 7.952697ms) Feb 2 15:07:53.174: INFO: (16) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... 
(200; 8.52913ms) Feb 2 15:07:53.175: INFO: (16) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 9.615826ms) Feb 2 15:07:53.175: INFO: (16) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 9.864382ms) Feb 2 15:07:53.175: INFO: (16) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 9.919173ms) Feb 2 15:07:53.175: INFO: (16) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 9.794852ms) Feb 2 15:07:53.176: INFO: (16) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 9.913719ms) Feb 2 15:07:53.176: INFO: (16) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 9.972794ms) Feb 2 15:07:53.176: INFO: (16) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 10.056731ms) Feb 2 15:07:53.176: INFO: (16) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 10.015545ms) Feb 2 15:07:53.177: INFO: (16) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 11.147301ms) Feb 2 15:07:53.182: INFO: (17) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 4.966332ms) Feb 2 15:07:53.182: INFO: (17) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 4.817527ms) Feb 2 15:07:53.182: INFO: (17) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 4.835435ms) Feb 2 15:07:53.182: INFO: (17) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.394877ms) Feb 2 15:07:53.183: INFO: (17) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 5.687601ms) Feb 2 15:07:53.183: INFO: (17) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 5.749389ms) Feb 2 15:07:53.183: INFO: (17) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 6.061968ms) Feb 2 15:07:53.183: INFO: (17) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 6.521298ms) Feb 2 15:07:53.185: INFO: (17) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 7.794884ms) Feb 2 15:07:53.185: INFO: (17) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 8.162491ms) Feb 2 15:07:53.185: INFO: (17) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... 
(200; 8.106782ms) Feb 2 15:07:53.186: INFO: (17) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 9.179883ms) Feb 2 15:07:53.186: INFO: (17) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 9.474911ms) Feb 2 15:07:53.188: INFO: (17) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 10.788757ms) Feb 2 15:07:53.188: INFO: (17) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 10.869358ms) Feb 2 15:07:53.188: INFO: (17) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 10.676372ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 7.970778ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 7.703573ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 7.593529ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... (200; 7.468459ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 8.22469ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 8.129654ms) Feb 2 15:07:53.196: INFO: (18) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... 
(200; 7.608103ms) Feb 2 15:07:53.197: INFO: (18) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.478741ms) Feb 2 15:07:53.197: INFO: (18) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 8.706132ms) Feb 2 15:07:53.198: INFO: (18) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 9.497723ms) Feb 2 15:07:53.198: INFO: (18) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 9.834773ms) Feb 2 15:07:53.202: INFO: (18) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 13.050519ms) Feb 2 15:07:53.202: INFO: (18) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 13.525926ms) Feb 2 15:07:53.204: INFO: (18) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 15.444843ms) Feb 2 15:07:53.205: INFO: (18) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 16.098314ms) Feb 2 15:07:53.206: INFO: (18) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 17.022786ms) Feb 2 15:07:53.223: INFO: (19) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:460/proxy/: tls baz (200; 17.393256ms) Feb 2 15:07:53.224: INFO: (19) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5/proxy/rewriteme">test</a> (200; 17.90401ms) Feb 2 15:07:53.225: INFO: (19) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 19.372188ms) Feb 2 15:07:53.229: INFO: (19) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:443/proxy/tlsrewritem... (200; 22.793ms) Feb 2 15:07:53.230: INFO: (19) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 23.46973ms) Feb 2 15:07:53.230: INFO: (19) /api/v1/namespaces/proxy-8528/pods/https:proxy-service-b6rjb-pq9q5:462/proxy/: tls qux (200; 24.143693ms) Feb 2 15:07:53.230: INFO: (19) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:160/proxy/: foo (200; 24.331062ms) Feb 2 15:07:53.231: INFO: (19) /api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">test<... (200; 24.720732ms) Feb 2 15:07:53.231: INFO: (19) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/: <a href="/api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:1080/proxy/rewriteme">... 
(200; 24.852845ms) Feb 2 15:07:53.231: INFO: (19) /api/v1/namespaces/proxy-8528/pods/http:proxy-service-b6rjb-pq9q5:162/proxy/: bar (200; 25.457866ms) Feb 2 15:07:53.232: INFO: (19) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname1/proxy/: foo (200; 26.034254ms) Feb 2 15:07:53.233: INFO: (19) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname2/proxy/: bar (200; 27.579934ms) Feb 2 15:07:53.233: INFO: (19) /api/v1/namespaces/proxy-8528/services/proxy-service-b6rjb:portname1/proxy/: foo (200; 27.528657ms) Feb 2 15:07:53.233: INFO: (19) /api/v1/namespaces/proxy-8528/services/http:proxy-service-b6rjb:portname2/proxy/: bar (200; 27.417929ms) Feb 2 15:07:53.233: INFO: (19) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname2/proxy/: tls qux (200; 27.465792ms) Feb 2 15:07:53.237: INFO: (19) /api/v1/namespaces/proxy-8528/services/https:proxy-service-b6rjb:tlsportname1/proxy/: tls baz (200; 31.293385ms)
STEP: deleting ReplicationController proxy-service-b6rjb in namespace proxy-8528, will wait for the garbage collector to delete the pods
Feb 2 15:07:53.296: INFO: Deleting ReplicationController proxy-service-b6rjb took: 5.312799ms
Feb 2 15:07:53.397: INFO: Terminating ReplicationController proxy-service-b6rjb pods took: 100.566221ms
[AfterEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:07:55.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8528" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":7,"skipped":67,"failed":0}
[skipped-spec progress markers elided]
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:07:11.416: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-4893
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4893
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4893
Feb 2 15:07:11.453: INFO: Found 0 stateful pods, waiting for 1
Feb 2 15:07:21.462: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 2 15:07:21.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 2 15:07:21.637: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 2 15:07:21.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 2 15:07:21.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Feb 2 15:07:21.641: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 2 15:07:31.648: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 2 15:07:31.648: INFO: Waiting for statefulset status.replicas updated to 0
Feb 2 15:07:31.668: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999516s
Feb 2 15:07:32.671: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993005045s
Feb 2 15:07:33.676: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988895024s
Feb 2 15:07:34.680: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984811485s
Feb 2 15:07:35.684: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98067482s
Feb 2 15:07:36.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976346975s
Feb 2 15:07:37.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971363826s
Feb 2 15:07:38.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967356664s
Feb 2 15:07:39.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963086612s
Feb 2 15:07:40.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 958.569942ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4893
Feb 2 15:07:41.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 2 15:07:41.860: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 2 15:07:41.860: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 2 15:07:41.860: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 2 15:07:41.864: INFO: Found 1 stateful pods, waiting for 3
Feb 2 15:07:51.869: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 15:07:51.869: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 15:07:51.869: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 2 15:07:51.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 2 15:07:52.066: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 2 15:07:52.066: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 2 15:07:52.066: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Feb 2 15:07:52.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 2 15:07:52.232: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 2 15:07:52.232: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 2 15:07:52.232: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Feb 2 15:07:52.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 2 15:07:52.400: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 2 15:07:52.400: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 2 15:07:52.400: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Feb 2 15:07:52.400: INFO: Waiting for statefulset status.replicas updated to 0
Feb 2 15:07:52.404: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Feb 2 15:08:02.416: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 2 15:08:02.416: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 2 15:08:02.416: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 2 15:08:02.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999666s
Feb 2 15:08:03.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995315053s
Feb 2 15:08:04.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990773472s
Feb 2 15:08:05.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986694049s
Feb 2 15:08:06.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982129213s
Feb 2 15:08:07.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977456019s
Feb 2 15:08:08.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972729164s
Feb 2 15:08:09.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.967159541s
Feb 2 15:08:10.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961239468s
Feb 2 15:08:11.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.489785ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4893
Feb 2 15:08:12.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 2 15:08:12.647: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 2 15:08:12.647: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 2 15:08:12.647: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 2 15:08:12.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 2 15:08:12.795: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 2 15:08:12.795: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 2 15:08:12.795: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 2 15:08:12.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4893 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 2 15:08:12.960: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 2 15:08:12.960: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 2 15:08:12.960: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 2 15:08:12.960: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Feb 2 15:08:22.976: INFO: Deleting all statefulset in ns statefulset-4893
Feb 2 15:08:22.980: INFO: Scaling statefulset ss to 0
Feb 2 15:08:22.996: INFO: Waiting for statefulset status.replicas updated to 0
Feb 2 15:08:23.003: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:08:23.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4893" for this suite.
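The invariant this spec just verified ("Verifying statefulset ss doesn't scale past 1/3") can be reproduced outside the suite. Below is a minimal client-go sketch of the same check, assuming a reachable kubeconfig at the default path and this run's namespace and name (statefulset-4893/ss); it leans on the same wait.PollImmediate helper that appears in the DNS stack traces further down. Timing out is the success case here: it means the StatefulSet never reported extra replicas while a pod's readiness probe was broken.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the workload cluster at ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for 10s and bail out early if the StatefulSet ever reports more
	// than one replica while ss-0's readiness probe is failing.
	err = wait.PollImmediate(time.Second, 10*time.Second, func() (bool, error) {
		ss, err := cs.AppsV1().StatefulSets("statefulset-4893").Get(context.TODO(), "ss", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if ss.Status.Replicas > 1 {
			return false, fmt.Errorf("statefulset scaled past 1: %d replicas", ss.Status.Replicas)
		}
		return false, nil // never "succeed"; we want the timeout to elapse
	})
	if err == wait.ErrWaitTimeout {
		fmt.Println("ok: ss held at 1 replica for the whole window")
	} else {
		fmt.Println("check failed:", err)
	}
}
```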
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":17,"skipped":449,"failed":0}
[skipped-spec progress markers elided]
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:03:25.393: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 14.77.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.77.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.77.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.77.14_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4287.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4287.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4287.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4287.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 14.77.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.77.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.77.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.77.14_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:07:06.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-4287.svc.cluster.local from pod dns-4287/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677: the server is currently unable to handle the request (get pods dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677)
Feb 2 15:08:33.458: FAIL: Unable to read wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local from pod dns-4287/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4287/pods/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677/proxy/results/wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local": context deadline exceeded
Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00101c900, 0x10, 0x18}, {0x705047b, 0x7}, 0xc003cc6000, {0x7938928?, 0xc00358f500}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc003cc6000, {0xc00101c900, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008de1a0, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f E0202 15:08:33.459684 17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Feb 2 15:08:33.458: Unable to read wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local from pod dns-4287/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-4287/pods/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677/proxy/results/wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:222, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 
0xc0043cfad0?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00101c900, 0x10, 0x18}, {0x705047b, 0x7}, 0xc003cc6000, {0x7938928?, 0xc00358f500}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc003cc6000, {0xc00101c900, 0x10, 0x18})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7\nk8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0008de1a0, 0x72ecb90)\n\t/usr/local/go/src/testing/testing.go:1446 +0x10b\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1493 +0x35f"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 136 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6bb1ac0?, 0xc003ee4140}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x86 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001182a0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6bb1ac0, 0xc003ee4140}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x7d panic({0x623d460, 0x78c75a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc000559c80, 0x167}, {0xc0043cf4d0?, 0xc0043cf4e0?, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000559c80, 0x167}, {0xc0043cf5b0?, 0x7047513?, 0xc0043cf5d8?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x70f9eb9?, 0x2d?}, {0xc0043cf800?, 0x0?, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x845 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00101c900, 0x10, 0x18}, {0x705047b, 0x7}, 0xc003cc6000, {0x7938928?, 0xc00358f500}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc003cc6000, {0xc00101c900, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0043d1310?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb1 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0043d15c0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0?) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003a4ae10, 0xc0043d1988?, {0x78ceda0, 0xc000174800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003a4ae10, {0x78ceda0, 0xc000174800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0038c2000, 0xc003a4ae10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xf1 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0038c2000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1b6 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0038c2000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000198070, {0x7faa5412a700, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc000767170, 0x3, 0x3}, {0x790a160, 0xc000174800}, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4e5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78d5740?, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc00051dc80, 0x3, 0x6?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x189 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78d5740, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc0009d9e20, 0x2, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0x10a k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008de1a0, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:08:33.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4287" for this suite.
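The probe that timed out above reduces to A-record lookups of dns-test-service.dns-4287.svc.cluster.local over both UDP and TCP (the dig +notcp / +tcp pairs in the wheezy and jessie scripts). A minimal sketch of the same check, assuming it runs inside a cluster pod whose resolv.conf points at the cluster DNS service:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Name taken from this run's log; adjust for another namespace.
	fqdn := "dns-test-service.dns-4287.svc.cluster.local"
	for _, transport := range []string{"udp", "tcp"} {
		transport := transport
		r := &net.Resolver{
			PreferGo: true,
			// Force the DNS transport, mirroring dig +notcp / +tcp.
			Dial: func(ctx context.Context, _, addr string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, transport, addr)
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		addrs, err := r.LookupHost(ctx, fqdn)
		cancel()
		// A non-empty answer corresponds to the test writing its OK marker.
		fmt.Printf("%s lookup: addrs=%v err=%v\n", transport, addrs, err)
	}
}
```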
• Failure [308.194 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:08:33.458: Unable to read wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local from pod dns-4287/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4287/pods/dns-test-170a3dec-0bc1-4400-9db7-1066d6a6e677/proxy/results/wheezy_tcp@dns-test-service.dns-4287.svc.cluster.local": context deadline exceeded
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:07:56.068: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:08:04.123: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.126: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.129: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.132: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.134: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.137: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.140: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.143: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:04.143: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local]
Feb 2 15:08:09.148: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:09.152: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d)
Feb 2 15:08:09.155: INFO: Unable to read
wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.158: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.161: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.164: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.167: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.170: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:09.170: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local] Feb 2 15:08:14.146: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.149: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.152: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.155: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.158: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource 
(get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.161: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.164: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.166: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:14.166: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local] Feb 2 15:08:19.148: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.154: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.161: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.166: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.172: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.176: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.181: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.186: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod 
dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:19.186: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local] Feb 2 15:08:24.149: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.153: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.158: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.161: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.165: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.169: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.173: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.176: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:24.176: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local jessie_udp@dns-test-service-2.dns-3134.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-3134.svc.cluster.local] Feb 2 15:08:29.152: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local from pod dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d: the server could not find the requested resource (get pods dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d) Feb 2 15:08:29.180: INFO: Lookups using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3134.svc.cluster.local] Feb 2 15:08:34.190: INFO: DNS probes using dns-3134/dns-test-04f50474-aabb-4a06-a745-fd8279f14b3d succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:08:34.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-3134" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:08:34.325: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Feb 2 15:08:34.364: INFO: The status of Pod labelsupdatef718a3ee-2734-41df-b2c6-49bb8943f982 is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:08:36.370: INFO: The status of Pod labelsupdatef718a3ee-2734-41df-b2c6-49bb8943f982 is Running (Ready = true) Feb 2 15:08:36.893: INFO: Successfully updated pod "labelsupdatef718a3ee-2734-41df-b2c6-49bb8943f982" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:08:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7348" for this suite. 
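The labels-update pass just above works because the pod mounts its own metadata through a projected downwardAPI volume: the kubelet rewrites the projected file when the labels change, which is what "Successfully updated pod" is verifying. A minimal sketch of that volume shape, assuming client-go's corev1 types; the pod and volume names here are illustrative, not the test's generated ones:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithLabelsFile returns a pod whose own labels are projected into
// /etc/podinfo/labels; the kubelet refreshes that file after label updates.
func podWithLabelsFile() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example", // illustrative name
			Labels: map[string]string{"time": "t0"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = podWithLabelsFile() }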
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":128,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:08:38.933: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411
STEP: creating a pod
Feb 2 15:08:38.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 2 15:08:39.046: INFO: stderr: ""
Feb 2 15:08:39.046: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for log generator to start.
Feb 2 15:08:39.046: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 2 15:08:39.046: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1140" to be "running and ready, or succeeded"
Feb 2 15:08:39.052: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.116758ms
Feb 2 15:08:41.056: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.009297537s
Feb 2 15:08:41.056: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 2 15:08:41.056: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 2 15:08:41.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator'
Feb 2 15:08:41.157: INFO: stderr: ""
Feb 2 15:08:41.157: INFO: stdout: "I0202 15:08:39.748142 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/5ljx 311\nI0202 15:08:39.948318 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/hqr 205\nI0202 15:08:40.149205 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/c77z 303\nI0202 15:08:40.348428 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/85vc 435\nI0202 15:08:40.548836 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/5wp 423\nI0202 15:08:40.749188 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/jqj4 456\nI0202 15:08:40.948562 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/pcr 459\nI0202 15:08:41.149012 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/g58 530\n"
STEP: limiting log lines
Feb 2 15:08:41.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator --tail=1'
Feb 2 15:08:41.246: INFO: stderr: ""
Feb 2 15:08:41.246: INFO: stdout: "I0202 15:08:41.149012 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/g58 530\n"
Feb 2 15:08:41.246: INFO: got output "I0202 15:08:41.149012 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/g58 530\n"
STEP: limiting log bytes
Feb 2 15:08:41.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator --limit-bytes=1'
Feb 2 15:08:41.329: INFO: stderr: ""
Feb 2 15:08:41.329: INFO: stdout: "I"
Feb 2 15:08:41.329: INFO: got output "I"
STEP: exposing timestamps
Feb 2 15:08:41.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator --tail=1 --timestamps'
Feb 2 15:08:41.417: INFO: stderr: ""
Feb 2 15:08:41.417: INFO: stdout: "2023-02-02T15:08:41.348702958Z I0202 15:08:41.348393 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z776 531\n"
Feb 2 15:08:41.417: INFO: got output "2023-02-02T15:08:41.348702958Z I0202 15:08:41.348393 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z776 531\n"
STEP: restricting to a time range
Feb 2 15:08:43.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator --since=1s'
Feb 2 15:08:44.016: INFO: stderr: ""
Feb 2 15:08:44.016: INFO: stdout: "I0202 15:08:43.148226 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/7ltg 467\nI0202 15:08:43.348785 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/qmm 236\nI0202 15:08:43.549184 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/lgn 395\nI0202 15:08:43.748651 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/lwzm 594\nI0202 15:08:43.949090 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/2bmt 525\n"
Feb 2 15:08:44.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 logs logs-generator logs-generator --since=24h'
Feb 2 15:08:44.120: INFO: stderr: ""
Feb 2 15:08:44.120: INFO: stdout: "I0202 15:08:39.748142 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/5ljx 311\nI0202 15:08:39.948318 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/hqr 205\nI0202 15:08:40.149205 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/c77z 303\nI0202 15:08:40.348428 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/85vc 435\nI0202 15:08:40.548836 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/5wp 423\nI0202 15:08:40.749188 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/jqj4 456\nI0202 15:08:40.948562 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/pcr 459\nI0202 15:08:41.149012 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/g58 530\nI0202 15:08:41.348393 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z776 531\nI0202 15:08:41.548950 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/mq98 427\nI0202 15:08:41.748279 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/vz6 252\nI0202 15:08:41.948761 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/gpdk 392\nI0202 15:08:42.149042 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/84sk 491\nI0202 15:08:42.348406 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/gp6s 327\nI0202 15:08:42.548884 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/hbtl 217\nI0202 15:08:42.748234 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/xwk 432\nI0202 15:08:42.948728 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/7wv 577\nI0202 15:08:43.148226 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/7ltg 467\nI0202 15:08:43.348785 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/qmm 236\nI0202 15:08:43.549184 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/lgn 395\nI0202 15:08:43.748651 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/lwzm 594\nI0202 15:08:43.949090 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/2bmt 525\n"
[AfterEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416
Feb 2 15:08:44.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1140 delete pod logs-generator'
Feb 2 15:08:45.438: INFO: stderr: ""
Feb 2 15:08:45.438: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:08:45.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1140" for this suite.
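The four filters exercised above (--tail, --limit-bytes, --timestamps, --since) map directly onto fields of client-go's PodLogOptions, which is one way to reproduce the same reads programmatically. A minimal sketch under that assumption, reusing the namespace and pod name from this run; error handling is reduced to panics:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	tail, limit, since := int64(1), int64(1), int64(1)
	opts := &corev1.PodLogOptions{
		TailLines:    &tail,  // kubectl logs --tail=1
		LimitBytes:   &limit, // kubectl logs --limit-bytes=1
		Timestamps:   true,   // kubectl logs --timestamps
		SinceSeconds: &since, // kubectl logs --since=1s
	}
	rc, err := cs.CoreV1().Pods("kubectl-1140").GetLogs("logs-generator", opts).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // print whatever survives the combined filters
}

Each field is optional; leaving them nil or false streams the full log, the same as the unfiltered kubectl logs call above.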
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":10,"skipped":135,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:08:45.473: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:08:45.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9417" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":11,"skipped":150,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:03:06.491: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Feb 2 15:03:06.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 create -f -'
Feb 2 15:03:07.174: INFO: stderr: ""
Feb 2 15:03:07.174: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 15:03:07.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 15:03:07.260: INFO: stderr: ""
Feb 2 15:03:07.260: INFO: stdout: "update-demo-nautilus-c2hpw update-demo-nautilus-rt57h "
Feb 2 15:03:07.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods update-demo-nautilus-c2hpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:03:07.334: INFO: stderr: ""
Feb 2 15:03:07.335: INFO: stdout: ""
Feb 2 15:03:07.335: INFO: update-demo-nautilus-c2hpw is created but not running
Feb 2 15:03:12.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 15:03:12.409: INFO: stderr: ""
Feb 2 15:03:12.409: INFO: stdout: "update-demo-nautilus-c2hpw update-demo-nautilus-rt57h "
Feb 2 15:03:12.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods update-demo-nautilus-c2hpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:03:12.478: INFO: stderr: ""
Feb 2 15:03:12.478: INFO: stdout: "true"
Feb 2 15:03:12.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods update-demo-nautilus-c2hpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Feb 2 15:03:12.549: INFO: stderr: ""
Feb 2 15:03:12.549: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Feb 2 15:03:12.549: INFO: validating pod update-demo-nautilus-c2hpw
Feb 2 15:06:46.073: INFO: update-demo-nautilus-c2hpw is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-c2hpw)
Feb 2 15:06:51.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 15:06:51.151: INFO: stderr: ""
Feb 2 15:06:51.151: INFO: stdout: "update-demo-nautilus-c2hpw update-demo-nautilus-rt57h "
Feb 2 15:06:51.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods update-demo-nautilus-c2hpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:06:51.224: INFO: stderr: ""
Feb 2 15:06:51.224: INFO: stdout: "true"
Feb 2 15:06:51.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods update-demo-nautilus-c2hpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Feb 2 15:06:51.295: INFO: stderr: ""
Feb 2 15:06:51.295: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Feb 2 15:06:51.295: INFO: validating pod update-demo-nautilus-c2hpw
Feb 2 15:10:25.204: INFO: update-demo-nautilus-c2hpw is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-c2hpw)
Feb 2 15:10:30.207: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x1ec
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000605380, 0x72ecb90)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: using delete to clean up resources
Feb 2 15:10:30.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 delete --grace-period=0 --force -f -'
Feb 2 15:10:30.297: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 15:10:30.297: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 2 15:10:30.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get rc,svc -l name=update-demo --no-headers'
Feb 2 15:10:30.417: INFO: stderr: "No resources found in kubectl-8963 namespace.\n"
Feb 2 15:10:30.418: INFO: stdout: ""
Feb 2 15:10:30.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8963 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 2 15:10:30.535: INFO: stderr: ""
Feb 2 15:10:30.536: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:10:30.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8963" for this suite.
• Failure [444.060 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Feb 2 15:10:30.207: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":10,"skipped":190,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:10:30.553: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Feb 2 15:10:30.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 create -f -'
Feb 2 15:10:30.831: INFO: stderr: ""
Feb 2 15:10:30.832: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 15:10:30.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 15:10:30.933: INFO: stderr: ""
Feb 2 15:10:30.933: INFO: stdout: "update-demo-nautilus-67bt6 update-demo-nautilus-d96vc "
Feb 2 15:10:30.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods update-demo-nautilus-67bt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:10:31.027: INFO: stderr: ""
Feb 2 15:10:31.027: INFO: stdout: ""
Feb 2 15:10:31.027: INFO: update-demo-nautilus-67bt6 is created but not running
Feb 2 15:10:36.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 15:10:36.103: INFO: stderr: ""
Feb 2 15:10:36.103: INFO: stdout: "update-demo-nautilus-67bt6 update-demo-nautilus-d96vc "
Feb 2 15:10:36.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods update-demo-nautilus-67bt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:10:36.178: INFO: stderr: ""
Feb 2 15:10:36.178: INFO: stdout: "true"
Feb 2 15:10:36.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods update-demo-nautilus-67bt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Feb 2 15:10:36.253: INFO: stderr: ""
Feb 2 15:10:36.253: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Feb 2 15:10:36.253: INFO: validating pod update-demo-nautilus-67bt6
Feb 2 15:10:36.260: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 15:10:36.260: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 15:10:36.260: INFO: update-demo-nautilus-67bt6 is verified up and running
Feb 2 15:10:36.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods update-demo-nautilus-d96vc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 15:10:36.331: INFO: stderr: ""
Feb 2 15:10:36.331: INFO: stdout: "true"
Feb 2 15:10:36.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods update-demo-nautilus-d96vc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Feb 2 15:10:36.408: INFO: stderr: ""
Feb 2 15:10:36.408: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Feb 2 15:10:36.409: INFO: validating pod update-demo-nautilus-d96vc
Feb 2 15:10:36.414: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 15:10:36.414: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 15:10:36.414: INFO: update-demo-nautilus-d96vc is verified up and running
STEP: using delete to clean up resources
Feb 2 15:10:36.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 delete --grace-period=0 --force -f -'
Feb 2 15:10:36.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 15:10:36.490: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 2 15:10:36.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get rc,svc -l name=update-demo --no-headers'
Feb 2 15:10:36.593: INFO: stderr: "No resources found in kubectl-1753 namespace.\n"
Feb 2 15:10:36.593: INFO: stdout: ""
Feb 2 15:10:36.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1753 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 2 15:10:36.680: INFO: stderr: ""
Feb 2 15:10:36.680: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:10:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1753" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":11,"skipped":190,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:10:36.705: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-w8wh
STEP: Creating a pod to test atomic-volume-subpath
Feb 2 15:10:36.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-w8wh" in namespace "subpath-4689" to be "Succeeded or Failed"
Feb 2 15:10:36.861: INFO: Pod "pod-subpath-test-projected-w8wh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600211ms
Feb 2 15:10:38.867 - 15:10:56.911: INFO: Pod "pod-subpath-test-projected-w8wh": Phase="Running", Reason="", readiness=true, polled every ~2s from 2s through 20s elapsed
Feb 2 15:10:58.916: INFO: Pod "pod-subpath-test-projected-w8wh": Phase="Running", Reason="", readiness=false. Elapsed: 22.060996318s
Feb 2 15:11:00.921: INFO: Pod "pod-subpath-test-projected-w8wh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066098621s
STEP: Saw pod success
Feb 2 15:11:00.921: INFO: Pod "pod-subpath-test-projected-w8wh" satisfied condition "Succeeded or Failed"
Feb 2 15:11:00.925: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-subpath-test-projected-w8wh container test-container-subpath-projected-w8wh: <nil>
STEP: delete the pod
Feb 2 15:11:00.961: INFO: Waiting for pod pod-subpath-test-projected-w8wh to disappear
Feb 2 15:11:00.965: INFO: Pod pod-subpath-test-projected-w8wh no longer exists
STEP: Deleting pod pod-subpath-test-projected-w8wh
Feb 2 15:11:00.965: INFO: Deleting pod "pod-subpath-test-projected-w8wh" in namespace "subpath-4689"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:00.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4689" for this suite.
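The "Succeeded or Failed" wait above is a poll of pod.Status.Phase against a deadline, the same shape as the 5m0s wait the framework logs. A minimal sketch of that loop with client-go, reusing the namespace and pod name from this test; the interval is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(5 * time.Minute) // matches "Waiting up to 5m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("subpath-4689").Get(context.TODO(), "pod-subpath-test-projected-w8wh", metav1.GetOptions{})
		if err == nil && (pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed) {
			fmt.Println("pod reached terminal phase:", pod.Status.Phase)
			return
		}
		time.Sleep(2 * time.Second) // the elapsed values above suggest a ~2s cadence
	}
	fmt.Println("timed out waiting for a terminal phase")
}

The failed Update Demo test earlier used the same deadline pattern with a 300s budget, which is where its "Timed out after 300 seconds" message came from.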
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":12,"skipped":198,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:01.043: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Feb 2 15:11:01.096: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Feb 2 15:11:01.107: INFO: starting watch
STEP: patching
STEP: updating
Feb 2 15:11:01.125: INFO: waiting for watch events with expected annotations
Feb 2 15:11:01.125: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:01.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7326" for this suite.
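The Ingress sequence above (create, list, watch, patch, delete) can be replayed with client-go's NetworkingV1 client. A sketch under that assumption; the ingress and backend service names are hypothetical, and the patch mirrors the annotation update the watch step expects:

package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ings := cs.NetworkingV1().Ingresses("ingress-7326")

	// creating: a minimal Ingress with only a default backend
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ingress"}, // hypothetical name
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "example-svc", // hypothetical backend service
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	if _, err := ings.Create(context.TODO(), ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// listing
	if _, err := ings.List(context.TODO(), metav1.ListOptions{}); err != nil {
		panic(err)
	}
	// patching: add an annotation, the kind of change the watch above observes
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := ings.Patch(context.TODO(), "example-ingress", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// deleting
	if err := ings.Delete(context.TODO(), "example-ingress", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}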
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":13,"skipped":234,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:01.205: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:01.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7120" for this suite.
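The Endpoint lifecycle above is plain CRUD against the core/v1 Endpoints resource. A minimal client-go sketch; the endpoint name, address, and ports are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	eps := cs.CoreV1().Endpoints("services-7120")

	// creating an Endpoint
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-endpoint"}, // hypothetical name
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.1"}}, // illustrative address
			Ports:     []corev1.EndpointPort{{Port: 80}},
		}},
	}
	created, err := eps.Create(context.TODO(), ep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// updating the Endpoint (the test swaps in a new port, roughly like this)
	created.Subsets[0].Ports[0].Port = 8080
	if _, err := eps.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// deleting
	if err := eps.Delete(context.TODO(), "example-endpoint", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}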
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":14,"skipped":251,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:01.307: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:11:01.331: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 2 15:11:03.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9501 --namespace=crd-publish-openapi-9501 create -f -'
Feb 2 15:11:04.844: INFO: stderr: ""
Feb 2 15:11:04.844: INFO: stdout: "e2e-test-crd-publish-openapi-5822-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 2 15:11:04.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9501 --namespace=crd-publish-openapi-9501 delete e2e-test-crd-publish-openapi-5822-crds test-cr'
Feb 2 15:11:04.934: INFO: stderr: ""
Feb 2 15:11:04.934: INFO: stdout: "e2e-test-crd-publish-openapi-5822-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 2 15:11:04.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9501 --namespace=crd-publish-openapi-9501 apply -f -'
Feb 2 15:11:05.170: INFO: stderr: ""
Feb 2 15:11:05.170: INFO: stdout: "e2e-test-crd-publish-openapi-5822-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 2 15:11:05.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9501 --namespace=crd-publish-openapi-9501 delete e2e-test-crd-publish-openapi-5822-crds test-cr'
Feb 2 15:11:05.256: INFO: stderr: ""
Feb 2 15:11:05.256: INFO: stdout: "e2e-test-crd-publish-openapi-5822-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 2 15:11:05.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9501 explain e2e-test-crd-publish-openapi-5822-crds'
Feb 2 15:11:05.476: INFO: stderr: ""
Feb 2 15:11:05.476: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5822-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
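For context on "CRD without validation schema": an apiextensions.k8s.io/v1 CRD cannot omit the schema outright, so the schema-less case is expressed as an open object schema with x-kubernetes-preserve-unknown-fields. That is why client-side validation above accepts a CR with arbitrary unknown properties and why kubectl explain prints an empty DESCRIPTION. A sketch of that shape, with a hypothetical group and kind (not the test's generated ones):

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// openCRD returns a CRD whose schema accepts any fields: type "object"
// plus x-kubernetes-preserve-unknown-fields, i.e. "no validation schema".
func openCRD() *apiextensionsv1.CustomResourceDefinition {
	preserve := true
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // hypothetical
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
}

func main() { _ = openCRD() }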
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:07.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9501" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":15,"skipped":259,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:07.794: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:11:07.862: INFO: The status of Pod pod-secrets-7bafd232-ed2a-47aa-bc75-83d161bf4f6f is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:11:09.868: INFO: The status of Pod pod-secrets-7bafd232-ed2a-47aa-bc75-83d161bf4f6f is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:09.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-356" for this suite.
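The "should not conflict" check above boils down to one pod mounting a Secret volume and a ConfigMap volume side by side; both are served through wrapped emptyDir volumes on the node, hence the suite name. A sketch of that pod shape; all names are illustrative, not the test's generated ones:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithWrappedVolumes mounts a Secret and a ConfigMap in the same pod;
// the test asserts the two wrapped volumes coexist without conflict.
func podWithWrappedVolumes() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret"},
					{Name: "cm-vol", MountPath: "/etc/config"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "example-secret"},
				}},
				{Name: "cm-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
					},
				}},
			},
		},
	}
}

func main() { _ = podWithWrappedVolumes() }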
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":16,"skipped":272,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:09.913: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:11:09.958: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 2 15:11:14.965: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Scaling up "test-rs" replicaset
Feb 2 15:11:14.974: INFO: Updating replica set "test-rs"
STEP: patching the ReplicaSet
Feb 2 15:11:14.986: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 1, AvailableReplicas 1
Feb 2 15:11:15.002: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 1, AvailableReplicas 1
Feb 2 15:11:15.023: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 1, AvailableReplicas 1
Feb 2 15:11:15.030: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 1, AvailableReplicas 1
Feb 2 15:11:15.947: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 2, AvailableReplicas 2
Feb 2 15:11:16.789: INFO: observed ReplicaSet test-rs in namespace replicaset-3008 with ReadyReplicas 3; found true
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:16.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3008" for this suite.
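The scale-up and label mutation above are both single Patch calls against the apps/v1 ReplicaSet. A minimal client-go sketch using strategic-merge patches, reusing the test's namespace and ReplicaSet name; the exact patch bodies are assumptions for illustration:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rs := cs.AppsV1().ReplicaSets("replicaset-3008")

	// Scale up "test-rs" by patching spec.replicas; watching status afterwards
	// shows ReadyReplicas climb from 1 to 3, as the observations above do.
	scale := []byte(`{"spec":{"replicas":3}}`)
	if _, err := rs.Patch(context.TODO(), "test-rs", types.StrategicMergePatchType, scale, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// A second patch touching metadata, standing in for the test's patch step.
	label := []byte(`{"metadata":{"labels":{"patched":"true"}}}`)
	if _, err := rs.Patch(context.TODO(), "test-rs", types.StrategicMergePatchType, label, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}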
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":17,"skipped":276,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:16.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:11:17.366: INFO: Checking APIGroup: apiregistration.k8s.io
Feb 2 15:11:17.368: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Feb 2 15:11:17.368: INFO: Versions found [{apiregistration.k8s.io/v1 v1}]
Feb 2 15:11:17.368: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Feb 2 15:11:17.368: INFO: Checking APIGroup: apps
Feb 2 15:11:17.369: INFO: PreferredVersion.GroupVersion: apps/v1
Feb 2 15:11:17.369: INFO: Versions found [{apps/v1 v1}]
Feb 2 15:11:17.369: INFO: apps/v1 matches apps/v1
Feb 2 15:11:17.369: INFO: Checking APIGroup: events.k8s.io
Feb 2 15:11:17.371: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Feb 2 15:11:17.371: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Feb 2 15:11:17.371: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Feb 2 15:11:17.371: INFO: Checking APIGroup: authentication.k8s.io
Feb 2 15:11:17.372: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Feb 2 15:11:17.372: INFO: Versions found [{authentication.k8s.io/v1 v1}]
Feb 2 15:11:17.372: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Feb 2 15:11:17.372: INFO: Checking APIGroup: authorization.k8s.io
Feb 2 15:11:17.373: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Feb 2 15:11:17.373: INFO: Versions found [{authorization.k8s.io/v1 v1}]
Feb 2 15:11:17.373: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Feb 2 15:11:17.373: INFO: Checking APIGroup: autoscaling
Feb 2 15:11:17.374: INFO: PreferredVersion.GroupVersion: autoscaling/v2
Feb 2 15:11:17.374: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Feb 2 15:11:17.374: INFO: autoscaling/v2 matches autoscaling/v2
Feb 2 15:11:17.374: INFO: Checking APIGroup: batch
Feb 2 15:11:17.376: INFO: PreferredVersion.GroupVersion: batch/v1
Feb 2 15:11:17.376: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Feb 2 15:11:17.376: INFO: batch/v1 matches batch/v1
Feb 2 15:11:17.376: INFO: Checking APIGroup: certificates.k8s.io
Feb 2 15:11:17.377: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Feb 2 15:11:17.378: INFO: Versions found [{certificates.k8s.io/v1 v1}]
Feb 2 15:11:17.378: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Feb 2 15:11:17.378: INFO: Checking APIGroup: networking.k8s.io
Feb 2 15:11:17.379: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Feb 2 15:11:17.379: INFO: Versions found [{networking.k8s.io/v1 v1}]
Feb 2 15:11:17.379: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Feb 2 15:11:17.379: INFO: Checking APIGroup: policy
Feb 2 15:11:17.381: INFO: PreferredVersion.GroupVersion: policy/v1
Feb 2 15:11:17.381: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Feb 2 15:11:17.381: INFO: policy/v1 matches policy/v1
Feb 2 15:11:17.381: INFO: Checking APIGroup: rbac.authorization.k8s.io
Feb 2 15:11:17.382: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Feb 2 15:11:17.382: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}]
Feb 2 15:11:17.382: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Feb 2 15:11:17.382: INFO: Checking APIGroup: storage.k8s.io
Feb 2 15:11:17.383: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Feb 2 15:11:17.383: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Feb 2 15:11:17.383: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Feb 2 15:11:17.383: INFO: Checking APIGroup: admissionregistration.k8s.io
Feb 2 15:11:17.384: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Feb 2 15:11:17.384: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}]
Feb 2 15:11:17.384: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Feb 2 15:11:17.384: INFO: Checking APIGroup: apiextensions.k8s.io
Feb 2 15:11:17.385: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Feb 2 15:11:17.385: INFO: Versions found [{apiextensions.k8s.io/v1 v1}]
Feb 2 15:11:17.385: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Feb 2 15:11:17.385: INFO: Checking APIGroup: scheduling.k8s.io
Feb 2 15:11:17.386: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Feb 2 15:11:17.386: INFO: Versions found [{scheduling.k8s.io/v1 v1}]
Feb 2 15:11:17.386: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Feb 2 15:11:17.386: INFO: Checking APIGroup: coordination.k8s.io
Feb 2 15:11:17.387: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Feb 2 15:11:17.388: INFO: Versions found [{coordination.k8s.io/v1 v1}]
Feb 2 15:11:17.388: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Feb 2 15:11:17.388: INFO: Checking APIGroup: node.k8s.io
Feb 2 15:11:17.389: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Feb 2 15:11:17.389: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Feb 2 15:11:17.389: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Feb 2 15:11:17.389: INFO: Checking APIGroup: discovery.k8s.io
Feb 2 15:11:17.390: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Feb 2 15:11:17.390: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Feb 2 15:11:17.390: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Feb 2 15:11:17.390: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Feb 2 15:11:17.391: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2
Feb 2 15:11:17.391: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Feb 2 15:11:17.391: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:17.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-5322" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:08:45.556: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod with failed condition
STEP: updating the pod
Feb 2 15:10:46.112: INFO: Successfully updated pod "var-expansion-0bf3ab08-18c6-4298-a881-5cc5b6d30882"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Feb 2 15:10:48.120: INFO: Deleting pod "var-expansion-0bf3ab08-18c6-4298-a881-5cc5b6d30882" in namespace "var-expansion-4025"
Feb 2 15:10:48.126: INFO: Wait up to 5m0s for pod "var-expansion-0bf3ab08-18c6-4298-a881-5cc5b6d30882" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:20.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4025" for this suite.
• [SLOW TEST:154.589 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":12,"skipped":156,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:20.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-e1aa3339-44b8-4c03-a031-49295074816d
STEP: Creating a pod to test consume configMaps
Feb 2 15:11:20.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702" in namespace "configmap-4467" to be "Succeeded or Failed"
Feb 2 15:11:20.239: INFO: Pod "pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702": Phase="Pending", Reason="", readiness=false. Elapsed: 3.658863ms
Feb 2 15:11:22.246: INFO: Pod "pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010234419s
Feb 2 15:11:24.251: INFO: Pod "pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015587648s
STEP: Saw pod success
Feb 2 15:11:24.251: INFO: Pod "pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702" satisfied condition "Succeeded or Failed"
Feb 2 15:11:24.254: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:11:24.270: INFO: Waiting for pod pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702 to disappear
Feb 2 15:11:24.273: INFO: Pod pod-configmaps-e9f32871-d63f-4e40-8314-e3e3f544b702 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:24.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4467" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":172,"failed":0}
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":18,"skipped":278,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:17.403: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:34.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9976" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":19,"skipped":278,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:34.648: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Feb 2 15:11:34.684: INFO: The status of Pod labelsupdatea01743dd-e1e6-4742-bf5f-590661343a47 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:11:36.689: INFO: The status of Pod labelsupdatea01743dd-e1e6-4742-bf5f-590661343a47 is Running (Ready = true)
Feb 2 15:11:37.211: INFO: Successfully updated pod "labelsupdatea01743dd-e1e6-4742-bf5f-590661343a47"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:41.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5464" for this suite.
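Note: the Downward API volume spec above relies on the kubelet refreshing a projected file when pod labels change. A minimal sketch of such a pod, assuming illustrative names and image (only the volume wiring is the point):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // illustrative name
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative image
				// Keep printing the projected labels file; edits to
				// metadata.labels show up here without a restart.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}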
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":376,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:24.288: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 2 15:11:24.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5418 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:24.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5418 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 2 15:11:24.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5420 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:24.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5420 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 2 15:11:24.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5421 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:24.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5421 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 2 15:11:24.335: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5422 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:24.336: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8112 e02c795c-80a6-4659-90ec-54113cff40d1 5422 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 2 15:11:24.340: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8112 7c0db8d4-ce80-4ab8-9903-c032d1d6879a 5423 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:24.340: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8112 7c0db8d4-ce80-4ab8-9903-c032d1d6879a 5423 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 2 15:11:34.346: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8112 7c0db8d4-ce80-4ab8-9903-c032d1d6879a 5471 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:11:34.346: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8112 7c0db8d4-ce80-4ab8-9903-c032d1d6879a 5471 0 2023-02-02 15:11:24 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-02-02 15:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:44.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8112" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":14,"skipped":175,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:41.252: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c8726984-96c1-4fd1-8e1f-ae999ba34d23
STEP: Creating the pod
Feb 2 15:11:41.295: INFO: The status of Pod pod-projected-configmaps-67cd99ba-54a6-4e78-82ac-0d3b00184f7f is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:11:43.299: INFO: The status of Pod pod-projected-configmaps-67cd99ba-54a6-4e78-82ac-0d3b00184f7f is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-c8726984-96c1-4fd1-8e1f-ae999ba34d23
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-356" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":380,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:44.378: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Feb 2 15:11:44.413: INFO: Waiting up to 5m0s for pod "downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af" in namespace "downward-api-7108" to be "Succeeded or Failed"
Feb 2 15:11:44.417: INFO: Pod "downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.813724ms
Feb 2 15:11:46.421: INFO: Pod "downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007903638s
Feb 2 15:11:48.426: INFO: Pod "downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01314749s
STEP: Saw pod success
Feb 2 15:11:48.426: INFO: Pod "downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af" satisfied condition "Succeeded or Failed"
Feb 2 15:11:48.430: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af container dapi-container: <nil>
STEP: delete the pod
Feb 2 15:11:48.457: INFO: Waiting for pod downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af to disappear
Feb 2 15:11:48.461: INFO: Pod downward-api-1ae20859-f55d-49c1-8fc1-ddef737294af no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:48.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7108" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":180,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:45.345: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Feb 2 15:11:45.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:11:47.387: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Feb 2 15:11:47.399: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:11:49.403: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 2 15:11:49.431: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 15:11:49.435: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 15:11:51.435: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 15:11:51.440: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 15:11:53.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 15:11:53.441: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:53.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1151" for this suite.
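Note: the "pod with lifecycle hook" created above carries a postStart HTTPGet handler. A minimal sketch of such a pod spec; names, image, and the handler-pod IP are illustrative, and the handler type is corev1.LifecycleHandler in current k8s.io/api (the v1.22-era API called it corev1.Handler):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet sends this GET right after the container
					// starts; the spec above points it at the handler pod.
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.244.0.10", // illustrative handler-pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}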
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":382,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:48.537: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:11:58.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7651" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":16,"skipped":213,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:58.588: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:11:59.405: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:12:02.430: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:02.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9315" for this suite.
STEP: Destroying namespace "webhook-9315-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":17,"skipped":213,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:11:53.480: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-1280
STEP: creating service affinity-nodeport in namespace services-1280
STEP: creating replication controller affinity-nodeport in namespace services-1280
I0202 15:11:53.550945 16 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-1280, replica count: 3
I0202 15:11:56.603390 16 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 2 15:11:56.616: INFO: Creating new exec pod
Feb 2 15:11:59.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1280 exec execpod-affinityk8px9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Feb 2 15:11:59.819: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport+ 80\necho hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Feb 2 15:11:59.819: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Feb 2 15:11:59.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1280 exec execpod-affinityk8px9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.232.144 80'
Feb 2 15:11:59.988: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.232.144 80\nConnection to 10.135.232.144 80 port [tcp/http] succeeded!\n"
Feb 2 15:11:59.988: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Feb 2 15:11:59.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1280 exec execpod-affinityk8px9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 32456'
Feb 2 15:12:00.170: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 32456\nConnection to 172.18.0.6 32456 port [tcp/*] succeeded!\n"
Feb 2 15:12:00.170: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Feb 2 15:12:00.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1280 exec execpod-affinityk8px9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32456'
Feb 2 15:12:00.367: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32456\nConnection to 172.18.0.4 32456 port [tcp/*] succeeded!\n"
Feb 2 15:12:00.367: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Feb 2 15:12:00.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1280 exec execpod-affinityk8px9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32456/ ; done'
Feb 2 15:12:00.693: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32456/\n"
Feb 2 15:12:00.693: INFO: stdout: "\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj\naffinity-nodeport-tqnsj"
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Received response from host: affinity-nodeport-tqnsj
Feb 2 15:12:00.693: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-1280, will wait for the garbage collector to delete the pods
Feb 2 15:12:00.768: INFO: Deleting ReplicationController affinity-nodeport took: 7.811591ms
Feb 2 15:12:00.869: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.537811ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1280" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":397,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:03.284: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-89cca125-a88b-4758-a315-7b49568b741f
STEP: Creating a pod to test consume secrets
Feb 2 15:12:03.322: INFO: Waiting up to 5m0s for pod "pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d" in namespace "secrets-9244" to be "Succeeded or Failed"
Feb 2 15:12:03.326: INFO: Pod "pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.84235ms
Feb 2 15:12:05.331: INFO: Pod "pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008902499s
Feb 2 15:12:07.336: INFO: Pod "pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013969752s
STEP: Saw pod success
Feb 2 15:12:07.336: INFO: Pod "pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d" satisfied condition "Succeeded or Failed"
Feb 2 15:12:07.339: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:12:07.362: INFO: Waiting for pod pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d to disappear
Feb 2 15:12:07.364: INFO: Pod pod-secrets-bf7e2756-e3d5-4ef4-9c48-8521f5d4448d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:07.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9244" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":443,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:07.430: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:12:08.129: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:12:11.161: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:12:11.166: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:14.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1208" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":25,"skipped":478,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:14.388: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
Feb 2 15:12:15.531: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-qt17ut-x7fnr-d4zrz is Running (Ready = true)
Feb 2 15:12:15.605: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:15.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4176" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":26,"skipped":483,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:15.628: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pods
Feb 2 15:12:15.666: INFO: created test-pod-1
Feb 2 15:12:15.672: INFO: created test-pod-2
Feb 2 15:12:15.686: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Feb 2 15:12:15.686: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-7782' to be running and ready
Feb 2 15:12:15.752: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:12:15.752: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:12:15.752: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 2 15:12:15.752: INFO: 0 / 3 pods in namespace 'pods-7782' are running and ready (0 seconds elapsed)
Feb 2 15:12:15.752: INFO: expected 0 pod replicas in namespace 'pods-7782', 0 are Running and Ready.
Feb 2 15:12:15.752: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 2 15:12:15.752: INFO: test-pod-1 k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC }]
Feb 2 15:12:15.752: INFO: test-pod-2 k8s-upgrade-and-conformance-qt17ut-worker-cnnqas Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC }]
Feb 2 15:12:15.752: INFO: test-pod-3 k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 15:12:15 +0000 UTC }]
Feb 2 15:12:15.752: INFO:
Feb 2 15:12:17.765: INFO: 3 / 3 pods in namespace 'pods-7782' are running and ready (2 seconds elapsed)
Feb 2 15:12:17.765: INFO: expected 0 pod replicas in namespace 'pods-7782', 0 are Running and Ready.
STEP: waiting for all pods to be deleted
Feb 2 15:12:17.797: INFO: Pod quantity 3 is different from expected quantity 0
Feb 2 15:12:18.804: INFO: Pod quantity 3 is different from expected quantity 0
Feb 2 15:12:19.804: INFO: Pod quantity 2 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:20.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7782" for this suite.
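Note: the "delete a collection of pods" spec above is one DeleteCollection call followed by a poll until the list is empty. A minimal client-go sketch; the namespace and label selector are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "default"                       // illustrative; the spec uses a generated namespace
	sel := metav1.ListOptions{LabelSelector: "type=Testing"} // illustrative selector

	// One API call deletes every matching pod at once.
	if err := cs.CoreV1().Pods(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}

	// Poll until the list is empty ("waiting for all pods to be deleted").
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, sel)
		if err != nil {
			return false, err
		}
		fmt.Printf("remaining pods: %d\n", len(pods.Items))
		return len(pods.Items) == 0, nil
	})
	if err != nil {
		panic(err)
	}
}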
•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":27,"skipped":488,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:20.829: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Feb 2 15:12:21.816: INFO: starting watch
STEP: patching
STEP: updating
Feb 2 15:12:21.829: INFO: waiting for watch events with expected annotations
Feb 2 15:12:21.829: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:21.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-6673" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":28,"skipped":493,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:02.538: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:23.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2114" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":227,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:21.924: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-6c5e2352-9b84-4656-a07f-538c84bd05ac
STEP: Creating configMap with name cm-test-opt-upd-197700e3-148d-4ee8-a8d2-9f48e8ce683b
STEP: Creating the pod
Feb 2 15:12:21.978: INFO: The status of Pod pod-projected-configmaps-737e1158-7982-44e3-aba3-dd52ae7f9305 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:12:23.982: INFO: The status of Pod pod-projected-configmaps-737e1158-7982-44e3-aba3-dd52ae7f9305 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-6c5e2352-9b84-4656-a07f-538c84bd05ac
STEP: Updating configmap cm-test-opt-upd-197700e3-148d-4ee8-a8d2-9f48e8ce683b
STEP: Creating configMap with name cm-test-opt-create-ca7f9bfd-b07d-4dea-9623-87d1ca09f865
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:28.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-707" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":505,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:12:28.107: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes Feb 2 15:12:28.142: INFO: The status of Pod pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9 is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:12:30.147: INFO: The status of Pod pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9 is Running (Ready = true) �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: updating the pod Feb 2 15:12:30.668: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9" Feb 2 15:12:30.669: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9" in namespace "pods-8504" to be "terminated due to deadline exceeded" Feb 2 15:12:30.675: INFO: Pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9": Phase="Running", Reason="", readiness=true. Elapsed: 6.382928ms Feb 2 15:12:32.681: INFO: Pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9": Phase="Running", Reason="", readiness=true. Elapsed: 2.01225501s Feb 2 15:12:34.687: INFO: Pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9": Phase="Running", Reason="", readiness=false. Elapsed: 4.017808912s Feb 2 15:12:36.692: INFO: Pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.023125796s Feb 2 15:12:36.692: INFO: Pod "pod-update-activedeadlineseconds-a9d478c3-9b05-466d-b2c5-9fac5a2775f9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:12:36.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-8504" for this suite. 
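Note: activeDeadlineSeconds is one of the few Pod spec fields that may be changed on a running pod, which is what the "updating the pod" step relies on; once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded. A rough by-hand equivalent, with an illustrative pod name:

# Hypothetical by-hand version of the update step: patch a live pod's
# activeDeadlineSeconds, then watch it go Failed/DeadlineExceeded.
kubectl patch pod deadline-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'
# after ~5s: Failed/DeadlineExceeded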
•
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":526,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:36.732: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Feb 2 15:12:36.765: INFO: Waiting up to 5m0s for pod "security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0" in namespace "security-context-4412" to be "Succeeded or Failed"
Feb 2 15:12:36.768: INFO: Pod "security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.603592ms
Feb 2 15:12:38.776: INFO: Pod "security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011371365s
Feb 2 15:12:40.782: INFO: Pod "security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01759775s
STEP: Saw pod success
Feb 2 15:12:40.782: INFO: Pod "security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0" satisfied condition "Succeeded or Failed"
Feb 2 15:12:40.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0 container test-container: <nil>
STEP: delete the pod
Feb 2 15:12:40.820: INFO: Waiting for pod security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0 to disappear
Feb 2 15:12:40.825: INFO: Pod security-context-6a17ebee-cfa8-42c7-abb2-5656b1b1f3b0 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:40.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4412" for this suite.
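Note: the pod-level securityContext fields checked here simply become the uid/gid the container process runs as, which the test verifies from the container's output. A minimal sketch with illustrative values:

# Hypothetical pod showing the pod-level runAsUser/runAsGroup fields
# under test; the container's `id` output reports the configured uid/gid.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1001
    runAsGroup: 2002
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id"]
EOF
kubectl logs security-context-demo   # expect: uid=1001 gid=2002 ...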
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":542,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:40.931: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: starting the proxy server
Feb 2 15:12:40.977: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-861 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:41.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-861" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":32,"skipped":573,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:23.928: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:12:53.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4016" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":19,"skipped":296,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:53.263: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:12:53.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5" in namespace "projected-7827" to be "Succeeded or Failed"
Feb 2 15:12:53.859: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 293.269099ms
Feb 2 15:12:55.897: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330955412s
Feb 2 15:12:57.906: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340760139s
Feb 2 15:12:59.938: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37265374s
Feb 2 15:13:02.048: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482014293s
Feb 2 15:13:04.102: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536796923s
Feb 2 15:13:06.109: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.543443373s
Feb 2 15:13:08.116: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.550446841s
STEP: Saw pod success
Feb 2 15:13:08.116: INFO: Pod "downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5" satisfied condition "Succeeded or Failed"
Feb 2 15:13:08.126: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5 container client-container: <nil>
STEP: delete the pod
Feb 2 15:13:08.160: INFO: Waiting for pod downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5 to disappear
Feb 2 15:13:08.167: INFO: Pod downwardapi-volume-b2fc07ee-c3ce-4ec9-84bd-d0a410ed4be5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:08.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7827" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":312,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:08.228: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:13:10.349: INFO: DNS probes using dns-3417/dns-test-84cd57fc-3a61-44ea-b364-1687205c4a90 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:10.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3417" for this suite.
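Note: the doubled dollar signs in the probe commands above are not shell PIDs. Kubernetes expands $(VAR) references in container args, and $$ is its escape for a literal $, so each $$(dig ...) reaches the shell as an ordinary $(dig ...) command substitution. One pass of the wheezy loop, as the shell actually runs it (a sketch, not the suite's code):

# One iteration of the probe loop after Kubernetes turns $$ into $:
# dig the service name over UDP and TCP, then drop OK marker files
# that the suite later fetches through the pod proxy.
check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local
check="$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local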
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":21,"skipped":325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:10.552: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:13:10.604: INFO: Creating simple deployment test-new-deployment
Feb 2 15:13:10.637: INFO: deployment "test-new-deployment" doesn't have the required revision set
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Feb 2 15:13:12.763: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-2807 4113d678-e374-4b69-be55-f78003da62c5 7561 3 2023-02-02 15:13:10 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-02-02 15:13:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:13:12 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0047241d8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-02-02 15:13:12 +0000 UTC,LastTransitionTime:2023-02-02 15:13:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2023-02-02 15:13:12 +0000 UTC,LastTransitionTime:2023-02-02 15:13:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 2 15:13:12.780: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-2807 a3d4e045-f92e-4d75-8143-42aa05cf8bfb 7566 2 2023-02-02 15:13:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 4113d678-e374-4b69-be55-f78003da62c5 0xc0047245d0 0xc0047245d1}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:13:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4113d678-e374-4b69-be55-f78003da62c5\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:13:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004724658 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:13:12.787: INFO: Pod "test-new-deployment-5d9fdcc779-97qbp" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-97qbp test-new-deployment-5d9fdcc779- deployment-2807 57ec72f8-ab2a-4168-90cc-69f678b29cad 7565 0 2023-02-02 15:13:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 a3d4e045-f92e-4d75-8143-42aa05cf8bfb 0xc00393b240 0xc00393b241}] [] [{kube-controller-manager Update v1 2023-02-02 15:13:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3d4e045-f92e-4d75-8143-42aa05cf8bfb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j5ksj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5ksj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Condit
ions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:13:12.789: INFO: Pod "test-new-deployment-5d9fdcc779-lg5vp" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-lg5vp test-new-deployment-5d9fdcc779- deployment-2807 544cab5e-e18d-4232-8586-53e137bf19a7 7556 0 2023-02-02 15:13:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 a3d4e045-f92e-4d75-8143-42aa05cf8bfb 0xc00393b390 0xc00393b391}] [] [{kube-controller-manager Update v1 2023-02-02 15:13:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3d4e045-f92e-4d75-8143-42aa05cf8bfb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:13:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4tks4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tks4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.65,StartTime:2023-02-02 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:13:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9263e46ffe55e81a2174dc09a134265b806b1d16fd2b02019281f3e438b1c9b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:12.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2807" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":22,"skipped":381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:13.030: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of events
Feb 2 15:13:13.086: INFO: created test-event-1
Feb 2 15:13:13.095: INFO: created test-event-2
Feb 2 15:13:13.104: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Feb 2 15:13:13.112: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Feb 2 15:13:13.137: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:13.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7158" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":23,"skipped":439,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:13.175: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:15.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2061" for this suite.
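Note: the scale patch in this lifecycle walk goes through the RC's scale subresource; the everyday kubectl analogue is shown below (RC name illustrative):

# Hypothetical equivalent of the scale-subresource step: bump the RC's
# replica count and confirm spec and status converge.
kubectl scale rc lifecycle-demo --replicas=2
kubectl get rc lifecycle-demo -o jsonpath='{.spec.replicas}/{.status.availableReplicas}{"\n"}'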
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":24,"skipped":443,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:15.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:13:15.959: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Feb 2 15:13:15.979: INFO: The status of Pod pod-exec-websocket-971e09ff-be16-4669-bc13-fea96ac7ff17 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:13:17.985: INFO: The status of Pod pod-exec-websocket-971e09ff-be16-4669-bc13-fea96ac7ff17 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:18.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6687" for this suite.
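Note: rather than shelling out to kubectl, this spec upgrades a connection to the pod's exec subresource itself. The kubectl command below exercises the same API path (pod name illustrative; the endpoint comment is a sketch from the core v1 API shape, not taken from the suite's output):

# kubectl equivalent of the websocket exec check: run a command in the pod
# through the API server's exec subresource.
kubectl exec pod-exec-demo -- echo remote execution works
# Raw endpoint the test upgrades to a websocket (one query param per arg):
#   /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=hi&stdout=true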
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:08:23.179: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:23.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8033" for this suite.
• [SLOW TEST:300.267 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":18,"skipped":524,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:12:41.212: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Feb 2 15:13:21.530: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-qt17ut-x7fnr-d4zrz is Running (Ready = true)
Feb 2 15:13:21.756: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Feb 2 15:13:21.756: INFO: Deleting pod "simpletest.rc-22gpm" in namespace "gc-9455"
Feb 2 15:13:21.805: INFO: Deleting pod "simpletest.rc-24ln7" in namespace "gc-9455"
Feb 2 15:13:21.841: INFO: Deleting pod "simpletest.rc-28b59" in namespace "gc-9455"
Feb 2 15:13:21.860: INFO: Deleting pod "simpletest.rc-29hzb" in namespace "gc-9455"
Feb 2 15:13:21.898: INFO: Deleting pod "simpletest.rc-2nv2p" in namespace "gc-9455"
Feb 2 15:13:21.954: INFO: Deleting pod "simpletest.rc-2rbmg" in namespace "gc-9455"
Feb 2 15:13:22.016: INFO: Deleting pod "simpletest.rc-2wmb9" in namespace "gc-9455"
Feb 2 15:13:22.072: INFO: Deleting pod "simpletest.rc-488cc" in namespace "gc-9455"
Feb 2 15:13:22.118: INFO: Deleting pod "simpletest.rc-4jln2" in namespace "gc-9455"
Feb 2 15:13:22.173: INFO: Deleting pod "simpletest.rc-4rptr" in namespace "gc-9455"
Feb 2 15:13:22.248: INFO: Deleting pod "simpletest.rc-4vcs8" in namespace "gc-9455"
Feb 2 15:13:22.284: INFO: Deleting pod "simpletest.rc-5446z" in namespace "gc-9455"
Feb 2 15:13:22.412: INFO: Deleting pod "simpletest.rc-59ztf" in namespace "gc-9455"
Feb 2 15:13:22.452: INFO: Deleting pod "simpletest.rc-5kjmt" in namespace "gc-9455"
Feb 2 15:13:22.575: INFO: Deleting pod "simpletest.rc-5wkbl" in namespace "gc-9455"
Feb 2 15:13:22.656: INFO: Deleting pod "simpletest.rc-5zccv" in namespace "gc-9455"
Feb 2 15:13:22.703: INFO: Deleting pod "simpletest.rc-6bj7d" in namespace "gc-9455"
Feb 2 15:13:22.741: INFO: Deleting pod "simpletest.rc-6kr7l" in namespace "gc-9455"
Feb 2 15:13:22.807: INFO: Deleting pod "simpletest.rc-6lmfc" in namespace "gc-9455"
Feb 2 15:13:22.855: INFO: Deleting pod "simpletest.rc-724md" in namespace "gc-9455"
Feb 2 15:13:22.995: INFO: Deleting pod "simpletest.rc-7bp4b" in namespace "gc-9455"
Feb 2 15:13:23.076: INFO: Deleting pod "simpletest.rc-7h74b" in namespace "gc-9455"
Feb 2 15:13:23.117: INFO: Deleting pod "simpletest.rc-7mshp" in namespace "gc-9455"
Feb 2 15:13:23.184: INFO: Deleting pod "simpletest.rc-7rkx2" in namespace "gc-9455"
Feb 2 15:13:23.347: INFO: Deleting pod "simpletest.rc-7tghj" in namespace "gc-9455"
Feb 2 15:13:23.417: INFO: Deleting pod "simpletest.rc-7x95t" in namespace "gc-9455"
Feb 2 15:13:23.499: INFO: Deleting pod "simpletest.rc-85gpt" in namespace "gc-9455"
Feb 2 15:13:23.565: INFO: Deleting pod "simpletest.rc-8ctkp" in namespace "gc-9455"
Feb 2 15:13:23.622: INFO: Deleting pod "simpletest.rc-8hwq4" in namespace "gc-9455"
Feb 2 15:13:23.745: INFO: Deleting pod "simpletest.rc-8phqj" in namespace "gc-9455"
Feb 2 15:13:23.832: INFO: Deleting pod "simpletest.rc-95qw5" in namespace "gc-9455"
Feb 2 15:13:23.964: INFO: Deleting pod "simpletest.rc-98pfx" in namespace "gc-9455"
Feb 2 15:13:24.114: INFO: Deleting pod "simpletest.rc-9dvfv" in namespace "gc-9455"
Feb 2 15:13:24.233: INFO: Deleting pod "simpletest.rc-9pwzh" in namespace "gc-9455"
Feb 2 15:13:24.357: INFO: Deleting pod "simpletest.rc-b47wv" in namespace "gc-9455"
Feb 2 15:13:24.489: INFO: Deleting pod "simpletest.rc-bbxdw" in namespace "gc-9455"
Feb 2 15:13:24.561: INFO: Deleting pod "simpletest.rc-bf6w8" in namespace "gc-9455"
Feb 2 15:13:24.637: INFO: Deleting pod "simpletest.rc-bk5fk" in namespace "gc-9455"
Feb 2 15:13:24.718: INFO: Deleting pod "simpletest.rc-bp44w" in namespace "gc-9455"
Feb 2 15:13:24.802: INFO: Deleting pod "simpletest.rc-cpfnp" in namespace "gc-9455"
Feb 2 15:13:24.941: INFO: Deleting pod "simpletest.rc-csg7q" in namespace "gc-9455"
Feb 2 15:13:25.088: INFO: Deleting pod "simpletest.rc-czsmv" in namespace "gc-9455"
Feb 2 15:13:25.204: INFO: Deleting pod "simpletest.rc-dfs7h" in namespace "gc-9455"
Feb 2 15:13:25.379: INFO: Deleting pod "simpletest.rc-dg697" in namespace "gc-9455"
Feb 2 15:13:25.467: INFO: Deleting pod "simpletest.rc-dpzk6" in namespace "gc-9455"
Feb 2 15:13:25.502: INFO: Deleting pod "simpletest.rc-dt4wp" in namespace "gc-9455"
Feb 2 15:13:25.546: INFO: Deleting pod "simpletest.rc-dw7kk" in namespace "gc-9455"
Feb 2 15:13:25.625: INFO: Deleting pod "simpletest.rc-f6864" in namespace "gc-9455"
Feb 2 15:13:25.785: INFO: Deleting pod "simpletest.rc-fxpbl" in namespace "gc-9455"
Feb 2 15:13:25.881: INFO: Deleting pod "simpletest.rc-gj4m2" in namespace "gc-9455"
Feb 2 15:13:25.944: INFO: Deleting pod "simpletest.rc-h4mqc" in namespace "gc-9455"
Feb 2 15:13:26.001: INFO: Deleting pod "simpletest.rc-hq4bt" in namespace "gc-9455"
Feb 2 15:13:26.082: INFO: Deleting pod "simpletest.rc-hqf27" in namespace "gc-9455"
Feb 2 15:13:26.137: INFO: Deleting pod "simpletest.rc-j62ln" in namespace "gc-9455"
Feb 2 15:13:26.232: INFO: Deleting pod "simpletest.rc-jwdlv" in namespace "gc-9455"
Feb 2 15:13:26.299: INFO: Deleting pod "simpletest.rc-kcqfv" in namespace "gc-9455"
Feb 2 15:13:26.343: INFO: Deleting pod "simpletest.rc-kgm8n" in namespace "gc-9455"
Feb 2 15:13:26.366: INFO: Deleting pod "simpletest.rc-lk5vl" in namespace "gc-9455"
Feb 2 15:13:26.399: INFO: Deleting pod "simpletest.rc-lkqnt" in namespace "gc-9455"
Feb 2 15:13:26.458: INFO: Deleting pod "simpletest.rc-lp92d" in namespace "gc-9455"
Feb 2 15:13:26.537: INFO: Deleting pod "simpletest.rc-lzwxw" in namespace "gc-9455"
Feb 2 15:13:26.768: INFO: Deleting pod "simpletest.rc-ml7rf" in namespace "gc-9455"
Feb 2 15:13:26.894: INFO: Deleting pod "simpletest.rc-n476l" in namespace "gc-9455"
Feb 2 15:13:27.071: INFO: Deleting pod "simpletest.rc-n88bs" in namespace "gc-9455"
Feb 2 15:13:27.225: INFO: Deleting pod "simpletest.rc-nbqhr" in namespace "gc-9455"
Feb 2 15:13:27.342: INFO: Deleting pod "simpletest.rc-nklcp" in namespace "gc-9455"
Feb 2 15:13:27.533: INFO: Deleting pod "simpletest.rc-ntp5q" in namespace "gc-9455"
Feb 2 15:13:27.653: INFO: Deleting pod "simpletest.rc-nv2d4" in namespace "gc-9455"
Feb 2 15:13:27.921: INFO: Deleting pod "simpletest.rc-p2kxf" in namespace "gc-9455"
Feb 2 15:13:28.013: INFO: Deleting pod "simpletest.rc-p75ms" in namespace "gc-9455"
Feb 2 15:13:28.084: INFO: Deleting pod "simpletest.rc-pccr2" in namespace "gc-9455"
Feb 2 15:13:28.199: INFO: Deleting pod "simpletest.rc-prlv9" in namespace "gc-9455"
Feb 2 15:13:28.293: INFO: Deleting pod "simpletest.rc-ptwm8" in namespace "gc-9455"
Feb 2 15:13:28.346: INFO: Deleting pod "simpletest.rc-qcmc7" in namespace "gc-9455"
Feb 2 15:13:28.514: INFO: Deleting pod "simpletest.rc-qcs9m" in namespace "gc-9455"
Feb 2 15:13:28.715: INFO: Deleting pod "simpletest.rc-qg79f" in namespace "gc-9455"
Feb 2 15:13:28.934: INFO: Deleting pod "simpletest.rc-qzbzs" in namespace "gc-9455"
Feb 2 15:13:29.036: INFO: Deleting pod "simpletest.rc-r54fx" in namespace "gc-9455"
Feb 2 15:13:29.167: INFO: Deleting pod "simpletest.rc-rb96f" in namespace "gc-9455"
Feb 2 15:13:29.245: INFO: Deleting pod "simpletest.rc-rlbc6" in namespace "gc-9455"
Feb 2 15:13:29.343: INFO: Deleting pod "simpletest.rc-rqrkz" in namespace "gc-9455"
Feb 2 15:13:29.454: INFO: Deleting pod "simpletest.rc-rvvrp" in namespace "gc-9455"
Feb 2 15:13:29.548: INFO: Deleting pod "simpletest.rc-rxf2d" in namespace "gc-9455"
Feb 2 15:13:29.693: INFO: Deleting pod "simpletest.rc-sgvf5" in namespace "gc-9455"
Feb 2 15:13:29.788: INFO: Deleting pod "simpletest.rc-sv4xr" in namespace "gc-9455"
Feb 2 15:13:29.829: INFO: Deleting pod "simpletest.rc-t4fp8" in namespace "gc-9455"
Feb 2 15:13:29.979: INFO: Deleting pod "simpletest.rc-tfsml" in namespace "gc-9455"
Feb 2 15:13:30.185: INFO: Deleting pod "simpletest.rc-tnsp9" in namespace "gc-9455"
Feb 2 15:13:30.344: INFO: Deleting pod "simpletest.rc-v5ml5" in namespace "gc-9455"
Feb 2 15:13:30.617: INFO: Deleting pod "simpletest.rc-v7rtx" in namespace "gc-9455"
Feb 2 15:13:30.880: INFO: Deleting pod "simpletest.rc-vczxf" in namespace "gc-9455"
Feb 2 15:13:31.017: INFO: Deleting pod "simpletest.rc-vnxh4" in namespace "gc-9455"
Feb 2 15:13:31.141: INFO: Deleting pod "simpletest.rc-vq4hg" in namespace "gc-9455"
Feb 2 15:13:31.331: INFO: Deleting pod "simpletest.rc-vtbfp" in namespace "gc-9455"
Feb 2 15:13:31.510: INFO: Deleting pod "simpletest.rc-wq7ml" in namespace "gc-9455"
Feb 2 15:13:31.792: INFO: Deleting pod "simpletest.rc-x6bx2" in namespace "gc-9455"
Feb 2 15:13:31.920: INFO: Deleting pod "simpletest.rc-xsd7f" in namespace "gc-9455"
Feb 2 15:13:32.051: INFO: Deleting pod "simpletest.rc-xwgbb" in namespace "gc-9455"
Feb 2 15:13:32.332: INFO: Deleting pod "simpletest.rc-xz89c" in namespace "gc-9455"
Feb 2 15:13:32.429: INFO: Deleting pod "simpletest.rc-zptcz" in namespace "gc-9455"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:32.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9455" for this suite.
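Note: the orphaning behavior verified above is what kubectl exposes as a non-cascading delete; the e2e cleanup then removes the surviving pods one by one, which is the long deletion list. A hand-run analogue (RC name and label illustrative):

# Hypothetical kubectl analogue of the orphan-delete option under test:
# remove the RC without cascading, then confirm its pods survive, ownerless.
kubectl delete rc simpletest-demo --cascade=orphan
kubectl get pods -l name=simpletest-demo   # pods still Running, no ownerReferences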
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":33,"skipped":575,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":6,"skipped":233,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:08:33.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6347.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6347.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 35.207.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.207.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.207.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.207.35_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6347.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6347.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6347.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6347.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6347.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 35.207.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.207.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.207.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.207.35_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:12:09.652: INFO: Unable to read wheezy_udp@dns-test-service.dns-6347.svc.cluster.local from pod dns-6347/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b: the server is currently unable to handle the request (get pods dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b)
Feb 2 15:13:35.702: FAIL: Unable to read wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local from pod dns-6347/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6347/pods/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b/proxy/results/wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local": context deadline exceeded
Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0045d3e00, 0x10, 0x18}, {0x705047b, 0x7}, 0xc000b10000, {0x7938928?, 0xc00101c300}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc000b10000, {0xc0045d3e00, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008de1a0, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f E0202 15:13:35.705987 17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Feb 2 15:13:35.702: Unable to read wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local from pod dns-6347/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-6347/pods/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b/proxy/results/wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:222, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 
0xc0043cfad0?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0045d3e00, 0x10, 0x18}, {0x705047b, 0x7}, 0xc000b10000, {0x7938928?, 0xc00101c300}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc000b10000, {0xc0045d3e00, 0x10, 0x18})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7\nk8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0008de1a0, 0x72ecb90)\n\t/usr/local/go/src/testing/testing.go:1446 +0x10b\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1493 +0x35f"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 136 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6bb1ac0?, 0xc003ff0140}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x86 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001182a0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6bb1ac0, 0xc003ff0140}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x7d panic({0x623d460, 0x78c75a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc000559e00, 0x167}, {0xc0043cf4d0?, 0xc0043cf4e0?, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000559e00, 0x167}, {0xc0043cf5b0?, 0x7047513?, 0xc0043cf5d8?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x70f9eb9?, 0x2d?}, {0xc0043cf800?, 0x0?, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x845 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc0000a8800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0045d3e00, 0x10, 0x18}, {0x705047b, 0x7}, 0xc000b10000, {0x7938928?, 0xc00101c300}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc000b10000, {0xc0045d3e00, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0043d1310?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb1 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0043d15c0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0?) 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003a4ae10, 0xc0043d1988?, {0x78ceda0, 0xc000174800})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003a4ae10, {0x78ceda0, 0xc000174800})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0038c2000, 0xc003a4ae10)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xf1
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0038c2000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1b6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0038c2000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000198070, {0x7faa5412a700, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc000767170, 0x3, 0x3}, {0x790a160, 0xc000174800}, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4e5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78d5740?, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc00051dc80, 0x3, 0x6?})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x189
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78d5740, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc0009d9e20, 0x2, 0x2})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0x10a
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008de1a0, 0x72ecb90)
  /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6347" for this suite.
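Editor's note: the failure summarized just below comes from the probe-readback loop, not from the dig probes themselves: the framework polls each expected /results file through the API server's pod proxy until it appears or the poll times out, which is why the error is wrapped in wait.go:222 and reads "context deadline exceeded" on a pods/.../proxy/results/... GET. A rough client-go sketch of that readback pattern; the function name and timings are illustrative assumptions, not the framework's actual helper:

package example

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForProbeResult polls pods/<pod>/proxy/results/<file> until the probe
// container has written a non-empty result, the way the DNS test keeps
// re-reading its wheezy_*/jessie_* result files.
func waitForProbeResult(ctx context.Context, c kubernetes.Interface, ns, pod, file string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		data, err := c.CoreV1().RESTClient().Get().
			Namespace(ns).
			Resource("pods").
			Name(pod).
			SubResource("proxy").
			Suffix("results", file).
			Do(ctx).
			Raw()
		if err != nil {
			return false, nil // transient apiserver/proxy errors: keep polling
		}
		return len(data) > 0, nil
	})
}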
• Failure [304.195 seconds]
[sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:13:35.702: Unable to read wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local from pod dns-6347/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6347/pods/dns-test-fa1f6f8a-2f53-4da6-bc4c-fd22da060d4b/proxy/results/wheezy_tcp@dns-test-service.dns-6347.svc.cluster.local": context deadline exceeded
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:33.143: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-projected-all-test-volume-8fd6c7ec-ffb7-42d8-896a-1f4d0882aaa0
STEP: Creating secret with name secret-projected-all-test-volume-67fc82d5-7fe3-4b49-8f31-801177590218
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 2 15:13:33.553: INFO: Waiting up to 5m0s for pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb" in namespace "projected-6860" to be "Succeeded or Failed"
Feb 2 15:13:33.672: INFO: Pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 119.736438ms
Feb 2 15:13:35.687: INFO: Pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133937095s
Feb 2 15:13:37.707: INFO: Pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154573802s
Feb 2 15:13:39.784: INFO: Pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231330685s
STEP: Saw pod success
Feb 2 15:13:39.784: INFO: Pod "projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb" satisfied condition "Succeeded or Failed"
Feb 2 15:13:39.861: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb container projected-all-volume-test: <nil>
STEP: delete the pod
Feb 2 15:13:40.020: INFO: Waiting for pod projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb to disappear
Feb 2 15:13:40.054: INFO: Pod projected-volume-ba76755d-20de-4199-9213-80460fe8a5cb no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:40.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6860" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":605,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:18.354: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-9477
STEP: creating service affinity-clusterip-transition in namespace services-9477
STEP: creating replication controller affinity-clusterip-transition in namespace services-9477
I0202 15:13:18.470784 15 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-9477, replica count: 3
I0202 15:13:21.522798 15 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 2 15:13:21.550: INFO: Creating new exec pod
Feb 2 15:13:30.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9477 exec execpod-affinity79tp4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Feb 2 15:13:32.273: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-transition 80\n+ echo hostName\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Feb 2 15:13:32.273: INFO: stdout: "HTTP/1.1 400 Bad
Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Feb 2 15:13:32.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9477 exec execpod-affinity79tp4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.123.55 80' Feb 2 15:13:33.837: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.123.55 80\nConnection to 10.140.123.55 80 port [tcp/http] succeeded!\n" Feb 2 15:13:33.837: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Feb 2 15:13:33.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9477 exec execpod-affinity79tp4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.140.123.55:80/ ; done' Feb 2 15:13:35.522: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n" Feb 2 15:13:35.522: INFO: stdout: "\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-n4hk6\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-n4hk6\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-n4hk6\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-8fhvh\naffinity-clusterip-transition-n4hk6" Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-n4hk6 Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-n4hk6 Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-n4hk6 Feb 2 15:13:35.522: INFO: Received response 
from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-8fhvh Feb 2 15:13:35.522: INFO: Received response from host: affinity-clusterip-transition-n4hk6 Feb 2 15:13:35.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9477 exec execpod-affinity79tp4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.140.123.55:80/ ; done' Feb 2 15:13:38.263: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.123.55:80/\n" Feb 2 15:13:38.264: INFO: stdout: "\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj\naffinity-clusterip-transition-qbgmj" Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj Feb 2 15:13:38.264: INFO: Received response from 
host: affinity-clusterip-transition-qbgmj
Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj
Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj
Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj
Feb 2 15:13:38.264: INFO: Received response from host: affinity-clusterip-transition-qbgmj
Feb 2 15:13:38.264: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9477, will wait for the garbage collector to delete the pods
Feb 2 15:13:38.399: INFO: Deleting ReplicationController affinity-clusterip-transition took: 40.84817ms
Feb 2 15:13:38.600: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 200.958571ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:41.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9477" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":492,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:40.249: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should validate Deployment Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
Feb 2 15:13:40.410: INFO: Creating simple deployment test-deployment-vwlwv
Feb 2 15:13:40.541: INFO: deployment "test-deployment-vwlwv" doesn't have the required revision set
Feb 2 15:13:42.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 40, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-vwlwv-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Getting /status
Feb 2 15:13:44.634: INFO: Deployment test-deployment-vwlwv has Conditions: [{Available True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vwlwv-764bc7c4b7" has successfully progressed.}]
STEP: updating Deployment Status
Feb 2 15:13:44.657: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 40, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-vwlwv-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Deployment status to be updated
Feb 2 15:13:44.662: INFO: Observed &Deployment event: ADDED
Feb 2 15:13:44.663: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vwlwv-764bc7c4b7"}
Feb 2 15:13:44.663: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vwlwv-764bc7c4b7"}
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Feb 2 15:13:44.664: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-vwlwv-764bc7c4b7" is progressing.}
Feb 2 15:13:44.664: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Feb 2 15:13:44.664: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vwlwv-764bc7c4b7" has successfully progressed.}
Feb 2 15:13:44.665: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.665: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Feb 2 15:13:44.665: INFO: Observed Deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vwlwv-764bc7c4b7" has successfully progressed.}
Feb 2 15:13:44.665: INFO: Found Deployment test-deployment-vwlwv in namespace deployment-809 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Feb 2 15:13:44.665: INFO: Deployment test-deployment-vwlwv has an updated status
STEP: patching the Statefulset Status
Feb 2 15:13:44.665: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Feb 2 15:13:44.698: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Deployment status to be patched
Feb 2 15:13:44.706: INFO: Observed &Deployment event: ADDED
Feb 2 15:13:44.706: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vwlwv-764bc7c4b7"}
Feb 2 15:13:44.708: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.710: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vwlwv-764bc7c4b7"}
Feb 2 15:13:44.711: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Feb 2 15:13:44.711: INFO: Observed &Deployment event: MODIFIED
Feb 2 15:13:44.712: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False
2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Feb 2 15:13:44.712: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:40 +0000 UTC 2023-02-02 15:13:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-vwlwv-764bc7c4b7" is progressing.} Feb 2 15:13:44.712: INFO: Observed &Deployment event: MODIFIED Feb 2 15:13:44.712: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Feb 2 15:13:44.712: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vwlwv-764bc7c4b7" has successfully progressed.} Feb 2 15:13:44.712: INFO: Observed &Deployment event: MODIFIED Feb 2 15:13:44.713: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Feb 2 15:13:44.713: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-02-02 15:13:42 +0000 UTC 2023-02-02 15:13:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vwlwv-764bc7c4b7" has successfully progressed.} Feb 2 15:13:44.713: INFO: Observed deployment test-deployment-vwlwv in namespace deployment-809 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Feb 2 15:13:44.714: INFO: Observed &Deployment event: MODIFIED Feb 2 15:13:44.714: INFO: Found deployment test-deployment-vwlwv in namespace deployment-809 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Feb 2 15:13:44.714: INFO: Deployment test-deployment-vwlwv has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Feb 2 15:13:44.723: INFO: Deployment "test-deployment-vwlwv": &Deployment{ObjectMeta:{test-deployment-vwlwv deployment-809 7cf91907-b03e-4514-b848-5f5a733e6936 8619 1 2023-02-02 15:13:40 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-02-02 15:13:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-02-02 15:13:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-02-02 15:13:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0068b81a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-vwlwv-764bc7c4b7",LastUpdateTime:2023-02-02 15:13:44 +0000 UTC,LastTransitionTime:2023-02-02 15:13:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 2 15:13:44.731: INFO: New ReplicaSet "test-deployment-vwlwv-764bc7c4b7" of Deployment "test-deployment-vwlwv": &ReplicaSet{ObjectMeta:{test-deployment-vwlwv-764bc7c4b7 deployment-809 7cfc9764-7b2a-4091-8ced-541193b0f36c 8437 1 2023-02-02 15:13:40 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-vwlwv 7cf91907-b03e-4514-b848-5f5a733e6936 0xc0068b8567 0xc0068b8568}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:13:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cf91907-b03e-4514-b848-5f5a733e6936\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:13:42 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0068b8618 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:13:44.739: INFO: Pod "test-deployment-vwlwv-764bc7c4b7-wzwld" is available: &Pod{ObjectMeta:{test-deployment-vwlwv-764bc7c4b7-wzwld test-deployment-vwlwv-764bc7c4b7- deployment-809 54434bcc-0814-43f9-84c7-4ecc56ab3ee8 8435 0 2023-02-02 15:13:40 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-vwlwv-764bc7c4b7 7cfc9764-7b2a-4091-8ced-541193b0f36c 0xc0068b89a7 0xc0068b89a8}] [] [{kube-controller-manager Update v1 2023-02-02 15:13:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cfc9764-7b2a-4091-8ced-541193b0f36c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:13:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5qtj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qtj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-c
onformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.70,StartTime:2023-02-02 15:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:13:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://b5ca7fccef31521fda3d855b6485ef52b64864a8c7f987d54aa6228f352db35f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:44.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-809" for this suite.
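Editor's note: the "patching the Statefulset Status" step above is the upstream test's own (mislabeled) message; the object being patched is the Deployment, via a merge patch against its status subresource. A minimal client-go sketch of the same call, reusing the exact payload logged above; the helper name is illustrative:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchDeploymentStatus adds a custom condition by merge-patching the
// "status" subresource, which is what the Deployment Status endpoints
// test validates.
func patchDeploymentStatus(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	_, err := c.AppsV1().Deployments(ns).Patch(ctx, name, types.MergePatchType,
		payload, metav1.PatchOptions{}, "status") // note the subresource argument
	return err
}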
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":35,"skipped":631,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:41.529: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Feb 2 15:13:42.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 15:13:44.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 13, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Feb 2 15:13:47.866: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the mutating configmap webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-8547" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-8547-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":27,"skipped":513,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:44.805: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename init-container �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod Feb 2 15:13:44.872: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:49.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "init-container-4587" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":36,"skipped":638,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:50.109: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Getting a ResourceQuota �[1mSTEP�[0m: Updating a ResourceQuota �[1mSTEP�[0m: Verifying a ResourceQuota was modified �[1mSTEP�[0m: Deleting a ResourceQuota �[1mSTEP�[0m: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:50.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-4338" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":37,"skipped":686,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:50.513: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Feb 2 15:13:52.609: INFO: Deleting pod "var-expansion-d75d2d00-0025-498e-94dd-1bfae2d2bbc3" in namespace "var-expansion-392" Feb 2 15:13:52.622: INFO: Wait up to 5m0s for pod "var-expansion-d75d2d00-0025-498e-94dd-1bfae2d2bbc3" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:54.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-392" for this suite. 
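The ResourceQuota create/update/delete sequence logged above maps onto plain kubectl operations; a minimal sketch, with quota name and limits illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # illustrative name
spec:
  hard:
    pods: "5"
EOF
# Update the quota, verify the change, then delete it, mirroring the logged steps
kubectl patch resourcequota test-quota --type=merge -p '{"spec":{"hard":{"pods":"10"}}}'
kubectl get resourcequota test-quota -o jsonpath='{.spec.hard.pods}'
kubectl delete resourcequota test-quota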
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":38,"skipped":712,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:54.683: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:13:54.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-254" for this suite.
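The discovery-document walk above can be reproduced directly against the apiserver; a sketch assuming kubectl access and jq for filtering (jq is an assumption, not part of the suite):

kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[] | select(.name == "customresourcedefinitions")'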
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":39,"skipped":716,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:54.998: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename endpointslice �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:55.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "endpointslice-3890" for this suite. 
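For context, the EndpointSlice test above relies on the endpointslice controller mirroring a Service's selector into slices; a minimal sketch (service name, labels, and ports are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # illustrative
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
EOF
# Slices are labeled with the owning service's name by the controller
kubectl get endpointslices -l kubernetes.io/service-name=example-svc
# Deleting the Service garbage-collects its Endpoints and EndpointSlices
kubectl delete service example-svc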
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":40,"skipped":794,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:48.199: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Feb 2 15:13:49.650: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Feb 2 15:13:52.723: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Feb 2 15:13:52.734: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-534-crds.webhook.example.com via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource while v1 is storage version �[1mSTEP�[0m: Patching Custom Resource Definition to set v2 as storage �[1mSTEP�[0m: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:56.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5555" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5555-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":28,"skipped":523,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:55.382: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Feb 2 15:13:55.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4" in namespace "projected-2464" to be "Succeeded or Failed" Feb 2 15:13:55.464: INFO: Pod "downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818669ms Feb 2 15:13:57.474: INFO: Pod "downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016834466s Feb 2 15:13:59.482: INFO: Pod "downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024444612s �[1mSTEP�[0m: Saw pod success Feb 2 15:13:59.482: INFO: Pod "downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4" satisfied condition "Succeeded or Failed" Feb 2 15:13:59.487: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4 container client-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:13:59.529: INFO: Waiting for pod downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4 to disappear Feb 2 15:13:59.538: INFO: Pod downwardapi-volume-3d5e181b-7380-495e-871e-963119f27fd4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:13:59.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-2464" for this suite. 
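The DefaultMode check above sets file permissions on a projected downward API volume; a minimal pod sketch under the same idea (pod name and busybox image are illustrative, not what the suite runs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative; the suite uses its own test images
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400           # the mode the projected files should land with
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF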
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":816,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:59.565: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override arguments
Feb 2 15:13:59.615: INFO: Waiting up to 5m0s for pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40" in namespace "containers-7292" to be "Succeeded or Failed"
Feb 2 15:13:59.621: INFO: Pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40": Phase="Pending", Reason="", readiness=false. Elapsed: 5.230715ms
Feb 2 15:14:01.630: INFO: Pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40": Phase="Running", Reason="", readiness=true. Elapsed: 2.014352995s
Feb 2 15:14:03.650: INFO: Pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40": Phase="Running", Reason="", readiness=false. Elapsed: 4.034691623s
Feb 2 15:14:05.657: INFO: Pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041465377s
STEP: Saw pod success
Feb 2 15:14:05.657: INFO: Pod "client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40" satisfied condition "Succeeded or Failed"
Feb 2 15:14:05.664: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:14:05.696: INFO: Waiting for pod client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40 to disappear
Feb 2 15:14:05.707: INFO: Pod client-containers-e4b17176-3e0b-4913-8d56-96e8babbbe40 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:05.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7292" for this suite.
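The argument-override test above leans on the rule that spec.containers[].args replaces the image's CMD while leaving its ENTRYPOINT alone; a minimal sketch (pod name and args are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    # args overrides the image's default CMD; command (if set) would override ENTRYPOINT
    args: ["entrypoint-tester", "override", "arguments"]
EOF
kubectl logs args-override        # the tester prints the argv it received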
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":818,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:14:06.045: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating projection with secret that has name projected-secret-test-map-c34a9838-6365-47a9-98c2-abb171755d3d �[1mSTEP�[0m: Creating a pod to test consume secrets Feb 2 15:14:06.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c" in namespace "projected-9148" to be "Succeeded or Failed" Feb 2 15:14:06.135: INFO: Pod "pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.550843ms Feb 2 15:14:08.142: INFO: Pod "pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020240094s Feb 2 15:14:10.152: INFO: Pod "pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030028263s �[1mSTEP�[0m: Saw pod success Feb 2 15:14:10.152: INFO: Pod "pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c" satisfied condition "Succeeded or Failed" Feb 2 15:14:10.158: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c container projected-secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:14:10.199: INFO: Waiting for pod pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c to disappear Feb 2 15:14:10.206: INFO: Pod pod-projected-secrets-ae5adcd2-5676-4bd2-aabf-97e4e6d78a8c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:10.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9148" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":892,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:14:10.281: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Feb 2 15:14:10.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b" in namespace "downward-api-2824" to be "Succeeded or Failed" Feb 2 15:14:10.373: INFO: Pod "downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.012183ms Feb 2 15:14:12.384: INFO: Pod "downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030544611s Feb 2 15:14:14.394: INFO: Pod "downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041143921s �[1mSTEP�[0m: Saw pod success Feb 2 15:14:14.395: INFO: Pod "downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b" satisfied condition "Succeeded or Failed" Feb 2 15:14:14.400: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b container client-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:14:14.431: INFO: Waiting for pod downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b to disappear Feb 2 15:14:14.444: INFO: Pod downwardapi-volume-f5ab83b2-7fc4-47a1-94d2-6ea6aaa9e41b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-2824" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":909,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:14:14.614: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating projection with secret that has name secret-emptykey-test-c7168c34-38a6-494d-ad27-b34a3d7b8a28 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:14.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-3690" for this suite. 
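The empty-key Secret check above is pure API validation and never reaches a node; a sketch of the same rejection (secret name and value are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey           # illustrative
data:
  "": dmFsdWUtMQ==                # an empty key is invalid
EOF
# Expected: the apiserver rejects this with a field-validation error; nothing is created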
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":45,"skipped":952,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:14:14.860: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward api env vars Feb 2 15:14:14.926: INFO: Waiting up to 5m0s for pod "downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf" in namespace "downward-api-3602" to be "Succeeded or Failed" Feb 2 15:14:14.932: INFO: Pod "downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.988244ms Feb 2 15:14:16.938: INFO: Pod "downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012250881s Feb 2 15:14:18.946: INFO: Pod "downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01992587s �[1mSTEP�[0m: Saw pod success Feb 2 15:14:18.946: INFO: Pod "downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf" satisfied condition "Succeeded or Failed" Feb 2 15:14:18.953: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:14:18.988: INFO: Waiting for pod downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf to disappear Feb 2 15:14:18.994: INFO: Pod downward-api-1ebec510-abf6-4624-99b9-ac1470400bbf no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:18.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-3602" for this suite. 
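The host-IP test above wires a downward API fieldRef into an environment variable; a minimal sketch (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-hostip               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # illustrative
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
EOF
kubectl logs dapi-hostip          # prints the node's IP, e.g. 172.18.0.4 in this run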
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":987,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:56.506: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Feb 2 15:13:56.605: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 2 15:14:01.614: INFO: Pod name rollover-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Feb 2 15:14:01.615: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 2 15:14:03.623: INFO: Creating deployment "test-rollover-deployment" Feb 2 15:14:03.641: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 2 15:14:05.657: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 2 15:14:05.670: INFO: Ensure that both replica sets have 1 created replica Feb 2 15:14:05.683: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 2 15:14:05.704: INFO: Updating deployment test-rollover-deployment Feb 2 15:14:05.704: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 2 15:14:07.721: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 2 15:14:07.736: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 2 15:14:07.769: INFO: all replica sets need to contain the pod-template-hash label Feb 2 15:14:07.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:09.784: INFO: all replica sets 
need to contain the pod-template-hash label Feb 2 15:14:09.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 7, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:11.785: INFO: all replica sets need to contain the pod-template-hash label Feb 2 15:14:11.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 7, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:13.785: INFO: all replica sets need to contain the pod-template-hash label Feb 2 15:14:13.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 7, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:15.785: INFO: all replica sets need to contain the pod-template-hash label Feb 2 15:14:15.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 7, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:17.788: INFO: all replica sets need to contain the pod-template-hash label Feb 2 15:14:17.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 14, 7, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 14, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 15:14:19.788: INFO: Feb 2 15:14:19.788: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Feb 2 15:14:19.826: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5928 e89276de-3b6d-4581-9be2-1015a3db9730 9449 2 2023-02-02 15:14:03 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-02-02 15:14:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037b9f48 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-02-02 15:14:03 +0000 UTC,LastTransitionTime:2023-02-02 15:14:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-77db6f9f48" has successfully progressed.,LastUpdateTime:2023-02-02 15:14:17 +0000 UTC,LastTransitionTime:2023-02-02 15:14:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 2 15:14:19.834: INFO: New ReplicaSet "test-rollover-deployment-77db6f9f48" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-77db6f9f48 deployment-5928 9199f138-8306-4c59-9fef-3e500a7e18d7 9439 2 2023-02-02 15:14:05 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e89276de-3b6d-4581-9be2-1015a3db9730 0xc003e18737 0xc003e18738}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:14:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e89276de-3b6d-4581-9be2-1015a3db9730\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:14:17 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 77db6f9f48,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003e187e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:14:19.834: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 2 15:14:19.834: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5928 9622c982-e078-4b24-b8ab-95359797cdac 9448 2 2023-02-02 15:13:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e89276de-3b6d-4581-9be2-1015a3db9730 0xc003e18617 0xc003e18618}] [] [{e2e.test Update apps/v1 2023-02-02 15:13:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e89276de-3b6d-4581-9be2-1015a3db9730\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:14:17 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003e186d8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:14:19.834: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-5928 de5ac892-b997-485e-9431-6df8ada6428a 9334 2 2023-02-02 15:14:03 +0000 UTC <nil> <nil> 
map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e89276de-3b6d-4581-9be2-1015a3db9730 0xc003e18847 0xc003e18848}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:14:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e89276de-3b6d-4581-9be2-1015a3db9730\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:14:05 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003e188f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:14:19.843: INFO: Pod "test-rollover-deployment-77db6f9f48-x4sr8" is available: &Pod{ObjectMeta:{test-rollover-deployment-77db6f9f48-x4sr8 test-rollover-deployment-77db6f9f48- deployment-5928 46f62e2b-e3dd-459a-92df-eb74fee54c4b 9356 0 2023-02-02 15:14:05 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [{apps/v1 ReplicaSet test-rollover-deployment-77db6f9f48 9199f138-8306-4c59-9fef-3e500a7e18d7 0xc003e18df7 0xc003e18df8}] [] [{kube-controller-manager Update v1 2023-02-02 15:14:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9199f138-8306-4c59-9fef-3e500a7e18d7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:14:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2nwht,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nwht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformanc
e-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:14:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:14:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.73,StartTime:2023-02-02 15:14:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:14:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://6cf2102aedb378123c7ad4c43257f214317f06fe2c895eb6c6ae188c4a993abc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:19.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-5928" for this suite. 
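The rollover test above is driven by the strategy knobs visible in the Deployment dump (minReadySeconds: 10, maxUnavailable: 0, maxSurge: 1); a sketch of the same shape, with the deployment name and the new image tag illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover
spec:
  replicas: 1
  minReadySeconds: 10             # a pod must stay Ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # never drop below the desired replica count
      maxSurge: 1                 # roll by surging one extra pod at a time
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
EOF
# Changing the pod template starts the rollover; old ReplicaSets are scaled to zero
kubectl set image deployment/test-rollover agnhost=k8s.gcr.io/e2e-test-images/agnhost:2.40  # tag illustrative
kubectl rollout status deployment/test-rollover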
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":29,"skipped":579,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:13:23.520: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:13:23.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 2 15:13:41.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 create -f -'
Feb 2 15:13:44.241: INFO: stderr: ""
Feb 2 15:13:44.242: INFO: stdout: "e2e-test-crd-publish-openapi-5315-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 2 15:13:44.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 delete e2e-test-crd-publish-openapi-5315-crds test-foo'
Feb 2 15:13:44.442: INFO: stderr: ""
Feb 2 15:13:44.442: INFO: stdout: "e2e-test-crd-publish-openapi-5315-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 2 15:13:44.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 apply -f -'
Feb 2 15:13:45.061: INFO: stderr: ""
Feb 2 15:13:45.061: INFO: stdout: "e2e-test-crd-publish-openapi-5315-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 2 15:13:45.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 delete e2e-test-crd-publish-openapi-5315-crds test-foo'
Feb 2 15:13:45.272: INFO: stderr: ""
Feb 2 15:13:45.272: INFO: stdout: "e2e-test-crd-publish-openapi-5315-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values
Feb 2 15:13:45.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 create -f -'
Feb 2 15:14:09.285: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 2 15:14:09.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 create -f -'
Feb 2 15:14:11.473: INFO: rc: 1
Feb 2 15:14:11.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 apply -f -'
Feb 2 15:14:11.922: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 2 15:14:11.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 create -f -'
Feb 2 15:14:12.336: INFO: rc: 1
Feb 2 15:14:12.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 --namespace=crd-publish-openapi-932 apply -f -'
Feb 2 15:14:12.790: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 2 15:14:12.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 explain e2e-test-crd-publish-openapi-5315-crds'
Feb 2 15:14:13.218: INFO: stderr: ""
Feb 2 15:14:13.218: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5315-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 2 15:14:13.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 explain e2e-test-crd-publish-openapi-5315-crds.metadata'
Feb 2 15:14:13.639: INFO: stderr: ""
Feb 2 15:14:13.639: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5315-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 2 15:14:13.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 explain e2e-test-crd-publish-openapi-5315-crds.spec'
Feb 2 15:14:14.094: INFO: stderr: ""
Feb 2 15:14:14.094: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5315-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Feb 2 15:14:14.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 explain e2e-test-crd-publish-openapi-5315-crds.spec.bars'
Feb 2 15:14:14.553: INFO: stderr: ""
Feb 2 15:14:14.553: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5315-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 2 15:14:14.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-932 explain e2e-test-crd-publish-openapi-5315-crds.spec.bars2'
Feb 2 15:14:15.088: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:20.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-932" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":19,"skipped":533,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:19.127: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 2 15:14:19.178: INFO: Waiting up to 5m0s for pod "pod-06488131-2b82-4151-9996-f484a15774a6" in namespace "emptydir-275" to be "Succeeded or Failed"
Feb 2 15:14:19.193: INFO: Pod "pod-06488131-2b82-4151-9996-f484a15774a6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.236032ms
Feb 2 15:14:21.201: INFO: Pod "pod-06488131-2b82-4151-9996-f484a15774a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022349497s
Feb 2 15:14:23.210: INFO: Pod "pod-06488131-2b82-4151-9996-f484a15774a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03150814s
STEP: Saw pod success
Feb 2 15:14:23.210: INFO: Pod "pod-06488131-2b82-4151-9996-f484a15774a6" satisfied condition "Succeeded or Failed"
Feb 2 15:14:23.220: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-06488131-2b82-4151-9996-f484a15774a6 container test-container: <nil>
STEP: delete the pod
Feb 2 15:14:23.252: INFO: Waiting for pod pod-06488131-2b82-4151-9996-f484a15774a6 to disappear
Feb 2 15:14:23.257: INFO: Pod pod-06488131-2b82-4151-9996-f484a15774a6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:23.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-275" for this suite.
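Note: the "(root,0777,tmpfs)" case mounts a memory-backed emptyDir and checks the mount's mode; a hand-rolled equivalent might look like the sketch below (pod name, image, and command are illustrative, not the test's generated pod):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo      # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /mnt/tmpfs && touch /mnt/tmpfs/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/tmpfs
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory             # tmpfs-backed, as in the (root,0777,tmpfs) case
    EOF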
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1015,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:19.941: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should block an eviction until the PDB is updated to allow it [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Feb 2 15:14:20.030: INFO: pods: 0 < 3
Feb 2 15:14:22.040: INFO: running pods: 2 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observe all healthy pods
STEP: Patching the pdb to disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:28.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4202" for this suite.
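Note: the eviction-blocking flow above hinges on the PodDisruptionBudget's allowed disruptions. A minimal sketch (object name and labels are illustrative): with minAvailable equal to the replica count, evictions fail; loosening or deleting the budget lets them through.

    # A PDB that tolerates no disruptions while 3 matching replicas are required:
    cat <<'EOF' | kubectl apply -f -
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: demo-pdb                 # illustrative name
    spec:
      minAvailable: 3
      selector:
        matchLabels: {app: demo}
    EOF
    # Loosen the budget so one pod becomes evictable, as the test does by updating the PDB:
    kubectl patch pdb demo-pdb --type=merge -p '{"spec":{"minAvailable":2}}'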
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":30,"skipped":594,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:28.571: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:28.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8559" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":31,"skipped":646,"failed":0}
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:23.353: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied.
[Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Feb 2 15:14:23.424: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Feb 2 15:14:23.433: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Feb 2 15:14:23.433: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Feb 2 15:14:23.455: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Feb 2 15:14:23.455: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Feb 2 15:14:23.506: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Feb 2 15:14:23.506: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Feb 2 15:14:30.597: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:30.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-2588" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:28.745: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-5255/secret-test-7b4d2347-b7a6-459b-a893-655b2c99af92
STEP: Creating a pod to test consume secrets
Feb 2 15:14:28.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e" in namespace "secrets-5255" to be "Succeeded or Failed"
Feb 2 15:14:28.839: INFO: Pod "pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.24824ms
Feb 2 15:14:30.846: INFO: Pod "pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e": Phase="Running", Reason="", readiness=false. Elapsed: 2.01256449s
Feb 2 15:14:32.852: INFO: Pod "pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018302459s
STEP: Saw pod success
Feb 2 15:14:32.852: INFO: Pod "pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e" satisfied condition "Succeeded or Failed"
Feb 2 15:14:32.857: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e container env-test: <nil>
STEP: delete the pod
Feb 2 15:14:32.891: INFO: Waiting for pod pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e to disappear
Feb 2 15:14:32.898: INFO: Pod pod-configmaps-303580dc-b983-4080-9539-78f7387ff11e no longer exists
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:32.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5255" for this suite.
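Note: the LimitRange defaults verified above (requests of 100m cpu / 200Mi memory / 200Gi ephemeral-storage, limits of 500m / 500Mi / 500Gi) correspond to a spec like the sketch below; the values are read back from the log's Verifying lines, while the object name is illustrative and the test's min/max constraints are omitted.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: demo-limitrange          # illustrative name
    spec:
      limits:
      - type: Container
        defaultRequest:              # applied as requests when a pod declares none
          cpu: 100m
          memory: 200Mi
          ephemeral-storage: 200Gi
        default:                     # applied as limits when a pod declares none
          cpu: 500m
          memory: 500Mi
          ephemeral-storage: 500Gi
    EOF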
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":656,"failed":0}
------------------------------
[BeforeEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:32.964: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Feb 2 15:14:33.002: INFO: created test-podtemplate-1
Feb 2 15:14:33.009: INFO: created test-podtemplate-2
Feb 2 15:14:33.016: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Feb 2 15:14:33.021: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Feb 2 15:14:33.044: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:33.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5552" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":33,"skipped":677,"failed":0}
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[Conformance]","total":-1,"completed":48,"skipped":1035,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:14:30.634: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Feb 2 15:14:30.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4" in namespace "projected-1759" to be "Succeeded or Failed" Feb 2 15:14:30.715: INFO: Pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25523ms Feb 2 15:14:32.721: INFO: Pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016646227s Feb 2 15:14:34.792: INFO: Pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087503656s Feb 2 15:14:36.798: INFO: Pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093942845s �[1mSTEP�[0m: Saw pod success Feb 2 15:14:36.799: INFO: Pod "downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4" satisfied condition "Succeeded or Failed" Feb 2 15:14:36.803: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4 container client-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:14:36.833: INFO: Waiting for pod downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4 to disappear Feb 2 15:14:36.843: INFO: Pod downwardapi-volume-6085b1b9-2227-4dd5-bffa-113934130cd4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:14:36.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-1759" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1035,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:36.878: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override all
Feb 2 15:14:36.967: INFO: Waiting up to 5m0s for pod "client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43" in namespace "containers-1917" to be "Succeeded or Failed"
Feb 2 15:14:36.976: INFO: Pod "client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25176ms
Feb 2 15:14:38.984: INFO: Pod "client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016679118s
Feb 2 15:14:40.992: INFO: Pod "client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024398457s
STEP: Saw pod success
Feb 2 15:14:40.992: INFO: Pod "client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43" satisfied condition "Succeeded or Failed"
Feb 2 15:14:41.001: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:14:41.026: INFO: Waiting for pod client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43 to disappear
Feb 2 15:14:41.032: INFO: Pod client-containers-71d15f7a-9b3b-4deb-9524-d0548ce5de43 no longer exists
[AfterEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1917" for this suite.
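Note: the "override all" case above checks that spec.containers[].command and args replace the image's ENTRYPOINT and CMD respectively. A sketch (pod name and the specific args are illustrative, not the test's generated values):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        command: ["/agnhost"]                            # overrides the image ENTRYPOINT
        args: ["entrypoint-tester", "override", "args"]  # overrides the image CMD
    EOF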
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1035,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:41.369: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:14:42.708: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:14:45.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:14:45.778: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8155-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:14:48.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1070" for this suite.
STEP: Destroying namespace "webhook-1070-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":51,"skipped":1132,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:20.254: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 2 15:14:20.314: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 2 15:14:40.977: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:14:48.270: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:06.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9719" for this suite.
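Note: a single CRD serving several versions of one group, as the multiversion case above exercises, declares them all under spec.versions; both served versions then appear in the aggregated OpenAPI document. A sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: multifoos.example.com    # illustrative name
    spec:
      group: example.com
      scope: Namespaced
      names: {plural: multifoos, singular: multifoo, kind: MultiFoo}
      versions:
      - name: v1
        served: true
        storage: true                # exactly one version is the storage version
        schema:
          openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
    EOF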
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":20,"skipped":550,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:14:49.096: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 2 15:14:49.172: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:14:55.212: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:10.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9893" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":52,"skipped":1133,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:06.897: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 2 15:15:09.490: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4341 pod-service-account-f0ca6177-54d3-4e5b-98df-357b0a6ba83d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 2 15:15:09.853: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4341 pod-service-account-f0ca6177-54d3-4e5b-98df-357b0a6ba83d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 2 15:15:10.189: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4341 pod-service-account-f0ca6177-54d3-4e5b-98df-357b0a6ba83d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:10.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4341" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":21,"skipped":558,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:10.619: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption is created
Feb 2 15:15:10.706: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:15:12.713: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:13.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6045" for this suite.
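Note: adoption as tested above means a bare pod carrying a matching 'name' label gains an ownerReference when an RC with that selector appears, instead of the RC creating a new pod. A sketch (image and names are illustrative):

    # A standalone pod with the label the RC will select:
    kubectl run pod-adoption --image=nginx --labels=name=pod-adoption --restart=Never
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption           # illustrative; selector matches the pod above
    spec:
      replicas: 1
      selector: {name: pod-adoption}
      template:
        metadata:
          labels: {name: pod-adoption}
        spec:
          containers:
          - name: nginx
            image: nginx
    EOF
    # The orphan pod should now be owned by the RC:
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'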
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":22,"skipped":567,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:10.320: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Feb 2 15:15:10.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:32.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5260" for this suite.
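Note: marking a version not served, as the case above does, is a single field flip on the CRD; the unserved version's definition then drops out of the published OpenAPI while other versions are unchanged. Assuming the two-version CRD sketched earlier:

    kubectl patch crd multifoos.example.com --type=json \
      -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'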
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":53,"skipped":1158,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:32.861: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:15:32.921: INFO: Endpoints addresses: [172.18.0.9] , ports: [6443]
Feb 2 15:15:32.921: INFO: EndpointSlices addresses: [172.18.0.9] , ports: [6443]
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:32.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-872" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":54,"skipped":1158,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:32.962: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-5cbaa456-fcb4-43a4-a312-442fd772048f
STEP: Creating a pod to test consume configMaps
Feb 2 15:15:33.032: INFO: Waiting up to 5m0s for pod "pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c" in namespace "configmap-5538" to be "Succeeded or Failed"
Feb 2 15:15:33.047: INFO: Pod "pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.332387ms
Feb 2 15:15:35.055: INFO: Pod "pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023128724s
Feb 2 15:15:37.062: INFO: Pod "pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030027928s
STEP: Saw pod success
Feb 2 15:15:37.062: INFO: Pod "pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c" satisfied condition "Succeeded or Failed"
Feb 2 15:15:37.068: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:15:37.096: INFO: Waiting for pod pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c to disappear
Feb 2 15:15:37.102: INFO: Pod pod-configmaps-548f554b-3b42-43db-bd8a-f5b62318c25c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:37.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5538" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1163,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:37.130: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 2 15:15:37.181: INFO: Waiting up to 5m0s for pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff" in namespace "emptydir-8888" to be "Succeeded or Failed"
Feb 2 15:15:37.187: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.471121ms
Feb 2 15:15:39.194: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012774899s
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:37.130: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 2 15:15:37.181: INFO: Waiting up to 5m0s for pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff" in namespace "emptydir-8888" to be "Succeeded or Failed"
Feb 2 15:15:37.187: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.471121ms
Feb 2 15:15:39.194: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012774899s
Feb 2 15:15:41.202: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020583329s
STEP: Saw pod success
Feb 2 15:15:41.202: INFO: Pod "pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff" satisfied condition "Succeeded or Failed"
Feb 2 15:15:41.210: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff container test-container: <nil>
STEP: delete the pod
Feb 2 15:15:41.238: INFO: Waiting for pod pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff to disappear
Feb 2 15:15:41.244: INFO: Pod pod-20c034b8-a5a6-4380-9153-2a5ed07ebfff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:41.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8888" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1165,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
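"tmpfs" here means an emptyDir backed by memory rather than node disk. A minimal sketch of such a volume (pod name illustrative; the mode check the spec performs is elided):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "mount | grep ' /cache ' && echo tmpfs-ok"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # tmpfs-backed, as in the spec above
EOF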
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:13.916: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Feb 2 15:15:34.259: INFO: EndpointSlice for Service endpointslice-6484/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6484" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":23,"skipped":614,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:41.364: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name projected-secret-test-b05b4017-03b2-4355-bb58-ed5fd0568577
STEP: Creating a pod to test consume secrets
Feb 2 15:15:41.426: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc" in namespace "projected-3092" to be "Succeeded or Failed"
Feb 2 15:15:41.432: INFO: Pod "pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162275ms
Feb 2 15:15:43.438: INFO: Pod "pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012611234s
Feb 2 15:15:45.445: INFO: Pod "pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019431173s
STEP: Saw pod success
Feb 2 15:15:45.445: INFO: Pod "pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc" satisfied condition "Succeeded or Failed"
Feb 2 15:15:45.451: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:15:45.483: INFO: Waiting for pod pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc to disappear
Feb 2 15:15:45.490: INFO: Pod pod-projected-secrets-e7a43ad8-3cda-4289-be7e-232f166cfddc no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:45.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3092" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1197,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:44.306: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create a Replicaset
STEP: Verify that the required pods have come up.
Feb 2 15:15:44.371: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 2 15:15:49.375: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Getting /status
Feb 2 15:15:49.385: INFO: Replicaset test-rs has Conditions: []
STEP: updating the Replicaset Status
Feb 2 15:15:49.398: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the ReplicaSet status to be updated
Feb 2 15:15:49.402: INFO: Observed &ReplicaSet event: ADDED
Feb 2 15:15:49.402: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.402: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.402: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.402: INFO: Found replicaset test-rs in namespace replicaset-3015 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Feb 2 15:15:49.402: INFO: Replicaset test-rs has an updated status
STEP: patching the Replicaset Status
Feb 2 15:15:49.402: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Feb 2 15:15:49.412: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Replicaset status to be patched
Feb 2 15:15:49.416: INFO: Observed &ReplicaSet event: ADDED
Feb 2 15:15:49.416: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.416: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.417: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.417: INFO: Observed replicaset test-rs in namespace replicaset-3015 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Feb 2 15:15:49.417: INFO: Observed &ReplicaSet event: MODIFIED
Feb 2 15:15:49.417: INFO: Found replicaset test-rs in namespace replicaset-3015 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC }
Feb 2 15:15:49.417: INFO: Replicaset test-rs has a patched status
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:49.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3015" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":24,"skipped":617,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:49.470: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Feb 2 15:15:49.692: INFO: running pods: 0 < 3
Feb 2 15:15:51.701: INFO: running pods: 2 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:53.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9186" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":25,"skipped":617,"failed":0}
SS
------------------------------
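The ReplicaSet spec above drives the /status subresource directly: read it, update it, then patch it with the payload shown in the log. With a recent kubectl (the --subresource flag, v1.24 or newer; namespace name illustrative) the same dance looks roughly like:

# Read only the status subresource of the ReplicaSet
kubectl get rs test-rs -n my-ns --subresource=status -o yaml
# Merge-patch a condition into .status, mirroring the spec's patch payload
kubectl patch rs test-rs -n my-ns --subresource=status --type=merge \
  -p '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'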
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:53.751: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-425220a2-6b15-494d-8db9-8032e67d7609
STEP: Creating a pod to test consume secrets
Feb 2 15:15:53.859: INFO: Waiting up to 5m0s for pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91" in namespace "secrets-1909" to be "Succeeded or Failed"
Feb 2 15:15:53.873: INFO: Pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91": Phase="Pending", Reason="", readiness=false. Elapsed: 13.598491ms
Feb 2 15:15:55.882: INFO: Pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91": Phase="Running", Reason="", readiness=true. Elapsed: 2.022410928s
Feb 2 15:15:57.890: INFO: Pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91": Phase="Running", Reason="", readiness=false. Elapsed: 4.030127297s
Feb 2 15:15:59.897: INFO: Pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036770858s
STEP: Saw pod success
Feb 2 15:15:59.897: INFO: Pod "pod-secrets-5b205376-8575-412b-8a81-3546152fbf91" satisfied condition "Succeeded or Failed"
Feb 2 15:15:59.901: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-secrets-5b205376-8575-412b-8a81-3546152fbf91 container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:15:59.929: INFO: Waiting for pod pod-secrets-5b205376-8575-412b-8a81-3546152fbf91 to disappear
Feb 2 15:15:59.949: INFO: Pod pod-secrets-5b205376-8575-412b-8a81-3546152fbf91 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:15:59.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1909" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":619,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:59.979: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:16:02.069: INFO: Deleting pod "var-expansion-ece9314b-ca10-4934-bdd3-bd5315c7f2c4" in namespace "var-expansion-938"
Feb 2 15:16:02.082: INFO: Wait up to 5m0s for pod "var-expansion-ece9314b-ca10-4934-bdd3-bd5315c7f2c4" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:04.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-938" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":27,"skipped":622,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:04.141: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Feb 2 15:16:04.214: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:16:06.225: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 2 15:16:07.269: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:08.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5223" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":28,"skipped":629,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:15:45.523: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-jdlx
STEP: Creating a pod to test atomic-volume-subpath
Feb 2 15:15:45.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jdlx" in namespace "subpath-5632" to be "Succeeded or Failed"
Feb 2 15:15:45.590: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Pending", Reason="", readiness=false. Elapsed: 5.522209ms
Feb 2 15:15:47.596: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 2.011792708s
Feb 2 15:15:49.615: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 4.030195998s
Feb 2 15:15:51.623: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 6.038461931s
Feb 2 15:15:53.629: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 8.044749504s
Feb 2 15:15:55.638: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 10.052854832s
Feb 2 15:15:57.646: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 12.061294917s
Feb 2 15:15:59.655: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 14.069899056s
Feb 2 15:16:01.664: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 16.079473082s
Feb 2 15:16:03.673: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 18.0885406s
Feb 2 15:16:05.680: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=true. Elapsed: 20.095619074s
Feb 2 15:16:07.689: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Running", Reason="", readiness=false. Elapsed: 22.104462952s
Feb 2 15:16:09.697: INFO: Pod "pod-subpath-test-downwardapi-jdlx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.112184821s
STEP: Saw pod success
Feb 2 15:16:09.697: INFO: Pod "pod-subpath-test-downwardapi-jdlx" satisfied condition "Succeeded or Failed"
Feb 2 15:16:09.703: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-subpath-test-downwardapi-jdlx container test-container-subpath-downwardapi-jdlx: <nil>
STEP: delete the pod
Feb 2 15:16:09.730: INFO: Waiting for pod pod-subpath-test-downwardapi-jdlx to disappear
Feb 2 15:16:09.736: INFO: Pod pod-subpath-test-downwardapi-jdlx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jdlx
Feb 2 15:16:09.736: INFO: Deleting pod "pod-subpath-test-downwardapi-jdlx" in namespace "subpath-5632"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:09.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5632" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":58,"skipped":1202,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:09.892: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2517.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2517.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2517.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2517.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:16:14.004: INFO: DNS probes using dns-2517/dns-test-8f18a6bb-69a3-46d0-8bf1-d25837f8f53f succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2517" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":59,"skipped":1238,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSS
------------------------------
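The probe loop the spec injects (visible verbatim in the STEP lines above) is plain shell around getent. The same check can be run ad hoc against any running pod whose image ships getent, for example a debian-based one (pod and namespace names here are illustrative):

# Confirm the pod's own hostname has an /etc/hosts entry, as the probers do
kubectl exec -n my-ns my-pod -- sh -c 'test -n "$(getent hosts "$(hostname)")" && echo OK'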
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:08.400: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:16:08.445: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:14.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7770" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":29,"skipped":641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:14.056: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's command
Feb 2 15:16:14.125: INFO: Waiting up to 5m0s for pod "var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd" in namespace "var-expansion-5141" to be "Succeeded or Failed"
Feb 2 15:16:14.132: INFO: Pod "var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.348861ms
Feb 2 15:16:16.141: INFO: Pod "var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015789345s
Feb 2 15:16:18.151: INFO: Pod "var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025351252s
STEP: Saw pod success
Feb 2 15:16:18.151: INFO: Pod "var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd" satisfied condition "Succeeded or Failed"
Feb 2 15:16:18.157: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd container dapi-container: <nil>
STEP: delete the pod
Feb 2 15:16:18.185: INFO: Waiting for pod var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd to disappear
Feb 2 15:16:18.190: INFO: Pod var-expansion-c8950267-0b72-4481-9a98-d742940e2dcd no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5141" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1242,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
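The substitution under test is kubelet-side: $(VAR) references in a container's command/args are expanded from the container's env before the process starts, with no shell involved. A hedged sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    # $(MESSAGE) is expanded by the kubelet, not by a shell
    command: ["/bin/echo", "$(MESSAGE)"]
    env:
    - name: MESSAGE
      value: hello from substitution
EOF
kubectl logs var-expansion-demo   # prints: hello from substitution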
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:15.003: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 2 15:16:19.105: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:19.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7076" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
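With FallbackToLogsOnError, a container that fails without writing its terminationMessagePath gets the tail of its log as the termination message instead. A minimal sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container has failed, the message is the log tail ("DONE")
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'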
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:19.307: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:16:20.329: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:16:23.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 2 15:16:23.411: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:23.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-867" for this suite.
STEP: Destroying namespace "webhook-867-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":31,"skipped":715,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:18.220: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Feb 2 15:16:18.292: INFO: The status of Pod annotationupdate9f409137-2322-4501-bf8e-f5f574632487 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:16:20.300: INFO: The status of Pod annotationupdate9f409137-2322-4501-bf8e-f5f574632487 is Running (Ready = true)
Feb 2 15:16:20.861: INFO: Successfully updated pod "annotationupdate9f409137-2322-4501-bf8e-f5f574632487"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:24.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4775" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1244,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
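Annotations exposed through a downwardAPI source in a projected volume are refreshed by the kubelet when pod metadata changes, which is exactly what this spec exercises. A hedged sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: watcher
    image: busybox:1.36
    command: ["sh", "-c", "while true; do cat /podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build=two --overwrite   # file content updates shortly after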
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:23.626: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8788.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8788.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8788.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8788.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 15:16:27.765: INFO: DNS probes using dns-8788/dns-test-df0185e2-d119-4049-bf49-3f6036e1d904 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:27.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8788" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":716,"failed":0}
SSSSSSSS
------------------------------
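The record being probed exists because a pod with spec.hostname and spec.subdomain matching a headless Service gets a per-pod DNS entry. A hedged sketch (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sub
spec:
  clusterIP: None        # headless
  selector:
    app: fqdn-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
  labels:
    app: fqdn-demo
spec:
  hostname: querier
  subdomain: sub         # must match the headless Service name
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF
# From another pod in the same namespace, the FQDN resolves:
#   getent hosts querier.sub.<namespace>.svc.cluster.local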
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:27.869: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:44.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7099" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":33,"skipped":724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
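"Terminating" in quota terms means a pod with activeDeadlineSeconds set; "NotTerminating" means one without. A quota carrying one of these scopes only counts the matching pods, which is the asymmetry the spec checks. A minimal sketch (quota name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]   # only pods with activeDeadlineSeconds count
EOF
# A long-running pod leaves .status.used untouched; a pod with
# activeDeadlineSeconds set consumes the quota
kubectl get resourcequota quota-terminating -o yaml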
[Conformance]","total":-1,"completed":33,"skipped":724,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:16:44.171: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir volume type on node default medium Feb 2 15:16:44.231: INFO: Waiting up to 5m0s for pod "pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1" in namespace "emptydir-2181" to be "Succeeded or Failed" Feb 2 15:16:44.243: INFO: Pod "pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.560906ms Feb 2 15:16:46.251: INFO: Pod "pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020223821s Feb 2 15:16:48.258: INFO: Pod "pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027114847s �[1mSTEP�[0m: Saw pod success Feb 2 15:16:48.258: INFO: Pod "pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1" satisfied condition "Succeeded or Failed" Feb 2 15:16:48.264: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1 container test-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:16:48.289: INFO: Waiting for pod pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1 to disappear Feb 2 15:16:48.292: INFO: Pod pod-0229f5b1-9ee1-406a-86e0-c18d14ebdbb1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:16:48.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-2181" for this suite. 
•
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:25.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-6229
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 2 15:16:25.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 2 15:16:25.247: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:16:27.253: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:16:29.254: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:31.255: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:33.254: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:35.254: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:37.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:39.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:41.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:43.255: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:45.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 2 15:16:47.253: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 2 15:16:47.263: INFO: The status of Pod netserver-1 is Running (Ready = true)
Feb 2 15:16:47.274: INFO: The status of Pod netserver-2 is Running (Ready = true)
Feb 2 15:16:47.285: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Feb 2 15:16:49.316: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Feb 2 15:16:49.316: INFO: Breadth first check of 192.168.1.57 on host 172.18.0.7...
Feb 2 15:16:49.323: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.56:9080/dial?request=hostname&protocol=http&host=192.168.1.57&port=8083&tries=1'] Namespace:pod-network-test-6229 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:16:49.323: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:16:49.324: INFO: ExecWithOptions: Clientset creation
Feb 2 15:16:49.325: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6229/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.56%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.57%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:16:49.590: INFO: Waiting for responses: map[]
Feb 2 15:16:49.590: INFO: reached 192.168.1.57 after 0/1 tries
Feb 2 15:16:49.590: INFO: Breadth first check of 192.168.0.94 on host 172.18.0.4...
Feb 2 15:16:49.597: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.56:9080/dial?request=hostname&protocol=http&host=192.168.0.94&port=8083&tries=1'] Namespace:pod-network-test-6229 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:16:49.597: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:16:49.597: INFO: ExecWithOptions: Clientset creation
Feb 2 15:16:49.597: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6229/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.56%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.94%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:16:49.790: INFO: Waiting for responses: map[]
Feb 2 15:16:49.790: INFO: reached 192.168.0.94 after 0/1 tries
Feb 2 15:16:49.790: INFO: Breadth first check of 192.168.6.55 on host 172.18.0.5...
Feb 2 15:16:49.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.56:9080/dial?request=hostname&protocol=http&host=192.168.6.55&port=8083&tries=1'] Namespace:pod-network-test-6229 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:16:49.796: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:16:49.797: INFO: ExecWithOptions: Clientset creation
Feb 2 15:16:49.797: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6229/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.56%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.6.55%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:16:49.948: INFO: Waiting for responses: map[]
Feb 2 15:16:49.948: INFO: reached 192.168.6.55 after 0/1 tries
Feb 2 15:16:49.948: INFO: Breadth first check of 192.168.2.50 on host 172.18.0.6...
Feb 2 15:16:49.953: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.56:9080/dial?request=hostname&protocol=http&host=192.168.2.50&port=8083&tries=1'] Namespace:pod-network-test-6229 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:16:49.954: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:16:49.955: INFO: ExecWithOptions: Clientset creation
Feb 2 15:16:49.955: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6229/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.56%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.50%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Feb 2 15:16:50.121: INFO: Waiting for responses: map[]
Feb 2 15:16:50.121: INFO: reached 192.168.2.50 after 0/1 tries
Feb 2 15:16:50.121: INFO: Going to retry 0 out of 4 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6229" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1269,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":756,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:48.307: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should list and delete a collection of ReplicaSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create a ReplicaSet
STEP: Verify that the required pods have come up
Feb 2 15:16:48.364: INFO: Pod name sample-pod: Found 0 pods out of 3
Feb 2 15:16:53.373: INFO: Pod name sample-pod: Found 3 pods out of 3
STEP: ensuring each pod is running
Feb 2 15:16:53.378: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]}
STEP: Listing all ReplicaSets
STEP: DeleteCollection of the ReplicaSets
STEP: After DeleteCollection verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:53.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6642" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":35,"skipped":756,"failed":0}
SSS
------------------------------
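The DeleteCollection step above is a single collection-scoped DELETE, not N individual deletes. With kubectl proxy the raw request looks roughly like this (namespace and selector illustrative):

kubectl proxy --port=8001 &
# One request removes every ReplicaSet matching the label selector
curl -X DELETE \
  "http://127.0.0.1:8001/apis/apps/v1/namespaces/replicaset-demo/replicasets?labelSelector=name%3Dsample-pod"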
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:53.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333
STEP: creating the pod
Feb 2 15:16:53.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 create -f -'
Feb 2 15:16:55.588: INFO: stderr: ""
Feb 2 15:16:55.589: INFO: stdout: "pod/pause created\n"
Feb 2 15:16:55.589: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 2 15:16:55.589: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9477" to be "running and ready"
Feb 2 15:16:55.598: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.788009ms
Feb 2 15:16:57.606: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.01686712s
Feb 2 15:16:57.606: INFO: Pod "pause" satisfied condition "running and ready"
Feb 2 15:16:57.606: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 2 15:16:57.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 label pods pause testing-label=testing-label-value'
Feb 2 15:16:57.800: INFO: stderr: ""
Feb 2 15:16:57.800: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 2 15:16:57.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 get pod pause -L testing-label'
Feb 2 15:16:57.957: INFO: stderr: ""
Feb 2 15:16:57.957: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 2 15:16:57.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 label pods pause testing-label-'
Feb 2 15:16:58.128: INFO: stderr: ""
Feb 2 15:16:58.129: INFO: stdout: "pod/pause unlabeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 2 15:16:58.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 get pod pause -L testing-label'
Feb 2 15:16:58.294: INFO: stderr: ""
Feb 2 15:16:58.294: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n"
[AfterEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1339
STEP: using delete to clean up resources
Feb 2 15:16:58.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 delete --grace-period=0 --force -f -'
Feb 2 15:16:58.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 15:16:58.501: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 2 15:16:58.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 get rc,svc -l name=pause --no-headers'
Feb 2 15:16:58.725: INFO: stderr: "No resources found in kubectl-9477 namespace.\n"
Feb 2 15:16:58.725: INFO: stdout: ""
Feb 2 15:16:58.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9477 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 2 15:16:58.884: INFO: stderr: ""
Feb 2 15:16:58.884: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:58.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9477" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":36,"skipped":759,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:58.979: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:16:59.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2899" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":37,"skipped":785,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:50.158: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replication controller my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2
Feb 2 15:16:50.224: INFO: Pod name my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2: Found 0 pods out of 1
Feb 2 15:16:55.229: INFO: Pod name my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2: Found 1 pods out of 1
Feb 2 15:16:55.230: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2" are running
Feb 2 15:16:55.234: INFO: Pod "my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2-m9dcc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-02-02 15:16:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-02-02 15:16:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-02-02 15:16:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-02-02 15:16:50 +0000 UTC Reason: Message:}])
Feb 2 15:16:55.234: INFO: Trying to dial the pod
Feb 2 15:17:00.258: INFO: Controller my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2: Got expected result from replica 1 [my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2-m9dcc]: "my-hostname-basic-c74c37fd-9e98-4585-9d49-11ce95ca6ee2-m9dcc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5333" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":63,"skipped":1272,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:16:59.115: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-3119ec8a-0cdd-4eb3-b2ac-29103cf5b95f
STEP: Creating a pod to test consume configMaps
Feb 2 15:16:59.170: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b" in namespace "projected-2368" to be "Succeeded or Failed"
Feb 2 15:16:59.176: INFO: Pod "pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.483903ms
Feb 2 15:17:01.183: INFO: Pod "pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013103453s
Feb 2 15:17:03.189: INFO: Pod "pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018850948s
STEP: Saw pod success
Feb 2 15:17:03.189: INFO: Pod "pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b" satisfied condition "Succeeded or Failed"
Feb 2 15:17:03.195: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:17:03.230: INFO: Waiting for pod pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b to disappear
Feb 2 15:17:03.235: INFO: Pod pod-projected-configmaps-cec8b4e8-c67f-4ca2-84d0-1a770c98ae3b no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:03.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2368" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":788,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:00.672: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:17:00.802: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"75b655b8-d909-495f-8215-eef9db447aa8", Controller:(*bool)(0xc003f676be), BlockOwnerDeletion:(*bool)(0xc003f676bf)}}
Feb 2 15:17:00.821: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"17f9461f-db35-4e3a-804a-eec4bd058455", Controller:(*bool)(0xc000624826), BlockOwnerDeletion:(*bool)(0xc000624827)}}
Feb 2 15:17:00.837: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"61334c04-fc43-4d83-bcce-b615ae07e350", Controller:(*bool)(0xc003f6791e), BlockOwnerDeletion:(*bool)(0xc003f6791f)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:05.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5259" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":64,"skipped":1391,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:03.277: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 2 15:17:03.346: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11501 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:17:03.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11503 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:17:03.346: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11504 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 2 15:17:13.394: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11618 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:17:13.395: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11619 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 2 15:17:13.395: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2115 c8054272-5a7a-4ae4-a066-41116ecf1f52 11620 0 2023-02-02 15:17:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-02-02 15:17:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:13.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2115" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":39,"skipped":793,"failed":0}
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:13.424: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:13.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4171" for this suite.
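The sysctl spec builds a pod whose securityContext requests one valid and two invalid sysctls and expects the create to be rejected. A minimal sketch of that shape (the malformed names below are illustrative stand-ins, not necessarily the exact ones the test uses; kernel.shm_rmid_forced is a real safe sysctl):

# Expected to fail validation because of the malformed sysctl names.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # valid safe sysctl
      value: "0"
    - name: foo-                     # malformed name, rejected
      value: "bar"
    - name: bar..                    # malformed name, rejected
      value: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.6
EOF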
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":40,"skipped":796,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:05.925: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 2 15:17:10.517: INFO: Successfully updated pod "adopt-release-md7wl"
STEP: Checking that the Job readopts the Pod
Feb 2 15:17:10.517: INFO: Waiting up to 15m0s for pod "adopt-release-md7wl" in namespace "job-1888" to be "adopted"
Feb 2 15:17:10.526: INFO: Pod "adopt-release-md7wl": Phase="Running", Reason="", readiness=true. Elapsed: 9.330489ms
Feb 2 15:17:12.536: INFO: Pod "adopt-release-md7wl": Phase="Running", Reason="", readiness=true. Elapsed: 2.018765285s
Feb 2 15:17:12.536: INFO: Pod "adopt-release-md7wl" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 2 15:17:13.057: INFO: Successfully updated pod "adopt-release-md7wl"
STEP: Checking that the Job releases the Pod
Feb 2 15:17:13.057: INFO: Waiting up to 15m0s for pod "adopt-release-md7wl" in namespace "job-1888" to be "released"
Feb 2 15:17:13.064: INFO: Pod "adopt-release-md7wl": Phase="Running", Reason="", readiness=true. Elapsed: 6.341093ms
Feb 2 15:17:15.070: INFO: Pod "adopt-release-md7wl": Phase="Running", Reason="", readiness=true. Elapsed: 2.012890855s
Feb 2 15:17:15.070: INFO: Pod "adopt-release-md7wl" satisfied condition "released"
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:15.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1888" for this suite.
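Adoption and release above are driven by labels and ownerReferences: the spec orphans a pod by stripping its controller reference, waits for the Job to re-adopt it by selector, then strips the matching labels and waits for the release. One hedged way to observe the same transitions from outside (pod and namespace names are taken from the log; job-name is assumed to be the selector label Jobs stamp on their pods in this release):

# An adopted pod carries a Job ownerReference; a released pod carries none.
kubectl -n job-1888 get pod adopt-release-md7wl \
  -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'

# Removing the selector label is one way to make the Job controller release the pod.
kubectl -n job-1888 label pod adopt-release-md7wl job-name-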
•
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":65,"skipped":1408,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:13.540: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should delete old replica sets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:17:13.599: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 2 15:17:18.608: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 2 15:17:18.608: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Feb 2 15:17:18.648: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2570 30b290e6-704b-47a6-83ba-a3c14b76d1ae 11674 1 2023-02-02 15:17:18 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-02-02 15:17:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a098b8 <nil> ClusterFirst map[]
<nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 2 15:17:18.664: INFO: New ReplicaSet "test-cleanup-deployment-5dbdbf94dc" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5dbdbf94dc deployment-2570 dd23b1df-0e1c-4cb5-b983-77f7a8844650 11676 1 2023-02-02 15:17:18 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5dbdbf94dc] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 30b290e6-704b-47a6-83ba-a3c14b76d1ae 0xc004a09d27 0xc004a09d28}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:17:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30b290e6-704b-47a6-83ba-a3c14b76d1ae\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5dbdbf94dc,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5dbdbf94dc] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a09db8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:17:18.664: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 2 15:17:18.664: INFO: 
&ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2570 772fb15f-bc22-4fd6-89bb-44eab1484784 11675 1 2023-02-02 15:17:13 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 30b290e6-704b-47a6-83ba-a3c14b76d1ae 0xc004a09bf7 0xc004a09bf8}] [] [{e2e.test Update apps/v1 2023-02-02 15:17:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:17:14 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-02-02 15:17:18 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"30b290e6-704b-47a6-83ba-a3c14b76d1ae\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a09cb8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:17:18.698: INFO: Pod "test-cleanup-controller-bh8rw" is available: &Pod{ObjectMeta:{test-cleanup-controller-bh8rw test-cleanup-controller- deployment-2570 da415b78-898c-429c-b8e0-5d03c7fad14f 11650 0 2023-02-02 15:17:13 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 772fb15f-bc22-4fd6-89bb-44eab1484784 0xc003581d37 0xc003581d38}] [] [{kube-controller-manager Update v1 2023-02-02 15:17:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"772fb15f-bc22-4fd6-89bb-44eab1484784\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:17:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bn4wn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bn4wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-cnnqas,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]Po
dReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:17:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:17:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:17:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:17:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.58,StartTime:2023-02-02 15:17:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-02-02 15:17:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://bc086768c36545d1143e05be2c50b6f0272e5b8d57c5b98fff63bf6e1537f3fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:17:18.699: INFO: Pod "test-cleanup-deployment-5dbdbf94dc-wmb6z" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5dbdbf94dc-wmb6z test-cleanup-deployment-5dbdbf94dc- deployment-2570 52aa97ae-c324-4482-947e-48438bba92b9 11678 0 2023-02-02 15:17:18 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5dbdbf94dc] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5dbdbf94dc dd23b1df-0e1c-4cb5-b983-77f7a8844650 0xc003581f17 0xc003581f18}] [] [{kube-controller-manager Update v1 2023-02-02 15:17:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd23b1df-0e1c-4cb5-b983-77f7a8844650\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ckzkx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTim
e:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:18.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2570" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":41,"skipped":810,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:15.160: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:17:15.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256" in namespace "downward-api-4730" to be "Succeeded or Failed"
Feb 2 15:17:15.211: INFO: Pod "downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256": Phase="Pending", Reason="", readiness=false. Elapsed: 5.520401ms
Feb 2 15:17:17.217: INFO: Pod "downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011836888s
Feb 2 15:17:19.225: INFO: Pod "downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019323336s
STEP: Saw pod success
Feb 2 15:17:19.225: INFO: Pod "downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256" satisfied condition "Succeeded or Failed"
Feb 2 15:17:19.230: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256 container client-container: <nil>
STEP: delete the pod
Feb 2 15:17:19.262: INFO: Waiting for pod downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256 to disappear
Feb 2 15:17:19.269: INFO: Pod downwardapi-volume-f2c3723f-b076-4be3-9051-c9e8c1e0f256 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:19.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4730" for this suite.
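The podname-only spec mounts a downwardAPI volume that projects metadata.name into a file and asserts on the file's contents. A minimal sketch of that wiring (resource names here are illustrative, not the generated names above; the agnhost image and its mounttest utility are the ones this run already uses):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["mounttest", "--file_content=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the pod's own name lands in the file
EOF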
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1435,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:18.816: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-ae4a9edf-e5e1-432c-bdf9-3e9a031f3077
STEP: Creating a pod to test consume secrets
Feb 2 15:17:18.917: INFO: Waiting up to 5m0s for pod "pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5" in namespace "secrets-982" to be "Succeeded or Failed"
Feb 2 15:17:18.938: INFO: Pod "pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.5917ms
Feb 2 15:17:20.950: INFO: Pod "pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033283771s
Feb 2 15:17:22.960: INFO: Pod "pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042777558s
STEP: Saw pod success
Feb 2 15:17:22.960: INFO: Pod "pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5" satisfied condition "Succeeded or Failed"
Feb 2 15:17:22.966: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5 container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:17:23.000: INFO: Waiting for pod pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5 to disappear
Feb 2 15:17:23.006: INFO: Pod pod-secrets-086e02ec-0b94-49bb-8a42-b9dabb51c7a5 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:23.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-982" for this suite.
STEP: Destroying namespace "secret-namespace-9014" for this suite.
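The spec above creates a secret with the same name in a second namespace and verifies that only the pod's local secret is mounted. The mount itself is the ordinary secret-volume shape; a sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["mounttest", "--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolved in the pod's own namespace only
EOF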
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":836,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:19.352: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 2 15:17:19.444: INFO: Waiting up to 5m0s for pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863" in namespace "emptydir-6770" to be "Succeeded or Failed"
Feb 2 15:17:19.451: INFO: Pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397093ms
Feb 2 15:17:21.459: INFO: Pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015185787s
Feb 2 15:17:23.466: INFO: Pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021893081s
Feb 2 15:17:25.475: INFO: Pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030476481s
STEP: Saw pod success
Feb 2 15:17:25.475: INFO: Pod "pod-d7d3b91d-bba2-4217-97d4-3dce6188a863" satisfied condition "Succeeded or Failed"
Feb 2 15:17:25.482: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-d7d3b91d-bba2-4217-97d4-3dce6188a863 container test-container: <nil>
STEP: delete the pod
Feb 2 15:17:25.526: INFO: Waiting for pod pod-d7d3b91d-bba2-4217-97d4-3dce6188a863 to disappear
Feb 2 15:17:25.535: INFO: Pod pod-d7d3b91d-bba2-4217-97d4-3dce6188a863 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:25.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6770" for this suite.
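The (non-root,0644,default) case has a non-root container create a 0644 file in an emptyDir backed by the node's default medium and read its mode and content back. A sketch using the upstream agnhost mounttest flags (pod name and user ID are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the non-root writer
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["mounttest", "--new_file_0644=/test-volume/test-file", "--file_perm=/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium, i.e. node disk
EOF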
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1445,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:25.565: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:17:25.611: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 2 15:17:26.692: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:26.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2752" for this suite.
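To reproduce that failure condition by hand: cap the namespace at two pods, ask an RC for three replicas, and a ReplicaFailure condition appears in the RC's status until it is scaled back within quota (a sketch; names mirror the log, and the httpd image is the one this suite already pulls):

# A quota of two pods, then an RC that wants three.
kubectl create quota condition-test --hard=pods=2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
EOF

# The condition shows up in status and clears once the RC fits the quota.
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
kubectl scale rc condition-test --replicas=2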
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":68,"skipped":1446,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:23.082: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:17:23.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58" in namespace "projected-9896" to be "Succeeded or Failed"
Feb 2 15:17:23.147: INFO: Pod "downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407783ms
Feb 2 15:17:25.155: INFO: Pod "downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012415887s
Feb 2 15:17:27.165: INFO: Pod "downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022841638s
STEP: Saw pod success
Feb 2 15:17:27.166: INFO: Pod "downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58" satisfied condition "Succeeded or Failed"
Feb 2 15:17:27.170: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58 container client-container: <nil>
STEP: delete the pod
Feb 2 15:17:27.196: INFO: Waiting for pod downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58 to disappear
Feb 2 15:17:27.200: INFO: Pod downwardapi-volume-30132f6f-13a1-4bf2-b6d0-1a73a7f7ca58 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:27.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9896" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":849,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:27.005: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:17:28.016: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:17:31.068: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:31.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1373" for this suite.
STEP: Destroying namespace "webhook-1373-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":69,"skipped":1511,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:27.244: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-340c6609-ecdf-4f00-8380-3dbea4c4ffd5
STEP: Creating a pod to test consume secrets
Feb 2 15:17:27.308: INFO: Waiting up to 5m0s for pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465" in namespace "secrets-9932" to be "Succeeded or Failed"
Feb 2 15:17:27.321: INFO: Pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465": Phase="Pending", Reason="", readiness=false. Elapsed: 12.378634ms
Feb 2 15:17:29.328: INFO: Pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465": Phase="Running", Reason="", readiness=true. Elapsed: 2.01921035s
Feb 2 15:17:31.344: INFO: Pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465": Phase="Running", Reason="", readiness=false. Elapsed: 4.035824177s
Feb 2 15:17:33.352: INFO: Pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04381454s
STEP: Saw pod success
Feb 2 15:17:33.353: INFO: Pod "pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465" satisfied condition "Succeeded or Failed"
Feb 2 15:17:33.360: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465 container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:17:33.389: INFO: Waiting for pod pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465 to disappear
Feb 2 15:17:33.396: INFO: Pod pod-secrets-be7f70f2-b1ad-4d35-bb09-6f939148b465 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:17:33.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9932" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":854,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:31.523: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Feb 2 15:17:31.611: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:17:33.618: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Feb 2 15:17:33.645: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Feb 2 15:17:35.653: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Feb 2 15:17:35.674: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 15:17:35.681: INFO: Pod pod-with-prestop-http-hook still exists Feb 2 15:17:37.682: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 15:17:37.688: INFO: Pod pod-with-prestop-http-hook still exists Feb 2 15:17:39.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 15:17:39.690: INFO: Pod pod-with-prestop-http-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:17:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-6671" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1537,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:17:39.765: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test service account token: Feb 2 15:17:39.831: INFO: Waiting up to 5m0s for pod "test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce" in namespace "svcaccounts-3234" to be "Succeeded or Failed" Feb 2 15:17:39.837: INFO: Pod "test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce": Phase="Pending", Reason="", readiness=false. Elapsed: 5.938ms Feb 2 15:17:41.845: INFO: Pod "test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014180524s Feb 2 15:17:43.853: INFO: Pod "test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022082329s �[1mSTEP�[0m: Saw pod success Feb 2 15:17:43.853: INFO: Pod "test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce" satisfied condition "Succeeded or Failed" Feb 2 15:17:43.857: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm pod test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:17:43.882: INFO: Waiting for pod test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce to disappear Feb 2 15:17:43.887: INFO: Pod test-pod-4d096fee-4ff0-4196-aeac-be8090ed0cce no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:17:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-3234" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":71,"skipped":1547,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":6,"skipped":233,"failed":2,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:13:37.793: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test 
-n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 58.18.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.18.58_udp@PTR;check="$$(dig +tcp +noall +answer +search 58.18.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.18.58_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 58.18.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.18.58_udp@PTR;check="$$(dig +tcp +noall +answer +search 58.18.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.18.58_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Feb 2 15:17:16.853: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378: the server is currently unable to handle the request (get pods dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378) Feb 2 15:18:42.345: FAIL: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6743/pods/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378/proxy/results/wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000101c00}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003098600, 0x10, 0x18}, {0x705047b, 0x7}, 0xc0038c4000, {0x7938928?, 0xc002e3a780}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc0038c4000, {0xc003098600, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008de1a0, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f E0202 15:18:42.345788 17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Feb 2 15:18:42.345: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-6743/pods/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378/proxy/results/wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:222, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000101c00})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 
0x2467887?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003098600, 0x10, 0x18}, {0x705047b, 0x7}, 0xc0038c4000, {0x7938928?, 0xc002e3a780}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc0038c4000, {0xc003098600, 0x10, 0x18})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7\nk8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0008de1a0, 0x72ecb90)\n\t/usr/local/go/src/testing/testing.go:1446 +0x10b\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1493 +0x35f"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
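The advisory above names the remedy itself. A minimal illustrative sketch of that pattern, written against the vendored Ginkgo v1 API seen in the stack traces (the spec body and assertion are hypothetical, not code from this suite):

```go
// Hypothetical sketch of the pattern the Ginkgo note recommends: any goroutine
// that makes assertions must defer GinkgoRecover() so a Fail panic raised on
// that goroutine is captured as an ordinary test failure.
package sketch_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

var _ = Describe("asserting from a goroutine", func() {
	It("recovers Fail panics raised off the spec goroutine", func() {
		done := make(chan struct{})
		go func() {
			// Without this deferred call, a failing Expect below would
			// panic the whole test process, as happened in the log above.
			defer GinkgoRecover()
			defer close(done)
			Expect("OK").To(Equal("OK")) // placeholder assertion
		}()
		<-done
	})
})

func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Sketch Suite")
}
```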
goroutine 136 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6bb1ac0?, 0xc003ee4200})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x86
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001182a0?})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x6bb1ac0, 0xc003ee4200})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x7d
panic({0x623d460, 0x78c75a0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc000559c80, 0x167}, {0xc0043cf4d0?, 0xc0043cf4e0?, 0x0?})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000559c80, 0x167}, {0xc0043cf5b0?, 0x7047513?, 0xc0043cf5d8?})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x197
k8s.io/kubernetes/test/e2e/framework.Failf({0x70f9eb9?, 0x2d?}, {0xc0043cf800?, 0x0?, 0x0?})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x12c
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x845
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000101c00})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79062a8?, 0xc000130000?}, 0xc0043cf9f8?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79062a8, 0xc000130000}, 0x38?, 0x2d15545?, 0x60?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79062a8, 0xc000130000}, 0x4a?, 0xc0043cfa88?, 0x2467887?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78ceda0?, 0xc000174800?, 0xc0043cfad0?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003098600, 0x10, 0x18}, {0x705047b, 0x7}, 0xc0038c4000, {0x7938928?, 0xc002e3a780}, 0x0, {0x0, ...})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0008f34a0, 0xc0038c4000, {0xc003098600, 0x10, 0x18})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x452
k8s.io/kubernetes/test/e2e/network.glob..func2.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc35
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0043d1310?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb1
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0043d15c0?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0?)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003a4ae10, 0xc0043d1988?, {0x78ceda0, 0xc000174800})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003a4ae10, {0x78ceda0, 0xc000174800})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0038c2000, 0xc003a4ae10)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xf1
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0038c2000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1b6
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0038c2000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000198070, {0x7faa5412a700, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc000767170, 0x3, 0x3}, {0x790a160, 0xc000174800}, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4e5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78d5740?, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc00051dc80, 0x3, 0x6?})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x189
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78d5740, 0xc0008de1a0}, {0x7087b0a, 0x14}, {0xc0009d9e20, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0x10a
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008de1a0, 0x72ecb90)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:18:42.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6743" for this suite.

• Failure [304.701 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Feb 2 15:18:42.345: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6743/pods/dns-test-cd26376a-8aa0-45dd-8042-00b1f59c9378/proxy/results/wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":6,"skipped":233,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:18:42.504: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:18:42.565: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:18:45.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4958" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":7,"skipped":234,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:18:45.846: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Feb 2 15:18:45.912: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:18:47.917: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.5 on the node which pod1 resides and expect scheduled
Feb 2 15:18:47.927: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:18:49.932: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.5 but use UDP protocol on the node which pod2 resides
Feb 2 15:18:49.941: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:18:51.946: INFO: The status of Pod pod3 is Running (Ready = false)
Feb 2 15:18:53.947: INFO: The status of Pod pod3 is Running (Ready = true)
Feb 2 15:18:53.956: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Feb 2 15:18:55.962: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Feb 2 15:18:55.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.5 http://127.0.0.1:54323/hostname] Namespace:hostport-6894 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:18:55.965: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:18:55.966: INFO: ExecWithOptions: Clientset creation
Feb 2 15:18:55.966: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6894/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.5+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323
Feb 2 15:18:56.105: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.5:54323/hostname] Namespace:hostport-6894 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:18:56.105: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:18:56.105: INFO: ExecWithOptions: Clientset creation
Feb 2 15:18:56.105: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6894/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.5%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323 UDP
Feb 2 15:18:56.197: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.5 54323] Namespace:hostport-6894 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 2 15:18:56.197: INFO: >>> kubeConfig: /tmp/kubeconfig
Feb 2 15:18:56.198: INFO: ExecWithOptions: Clientset creation
Feb 2 15:18:56.198: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6894/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.5+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:01.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-6894" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":274,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:19:01.320: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap that has name configmap-test-emptyKey-8ab1386a-fcac-4dbb-aa9d-d0495d51810a
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:01.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5258" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":9,"skipped":282,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:19:01.393: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should find a service from listing all namespaces [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching services
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:01.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1381" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":10,"skipped":302,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:33.691: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should schedule multiple jobs concurrently [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: Ensuring more than one job is running at a time
STEP: Ensuring at least two running jobs exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:01.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2191" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":45,"skipped":949,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:19:01.452: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-80d9f61f-ef37-45b1-b697-b94e5ae8da4e
STEP: Creating a pod to test consume configMaps
Feb 2 15:19:01.488: INFO: Waiting up to 5m0s for pod "pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6" in namespace "configmap-1549" to be "Succeeded or Failed"
Feb 2 15:19:01.492: INFO: Pod "pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.631964ms
Feb 2 15:19:03.498: INFO: Pod "pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009800666s
Feb 2 15:19:05.503: INFO: Pod "pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01476179s
STEP: Saw pod success
Feb 2 15:19:05.503: INFO: Pod "pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6" satisfied condition "Succeeded or Failed"
Feb 2 15:19:05.506: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6 container configmap-volume-test: <nil>
STEP: delete the pod
Feb 2 15:19:05.528: INFO: Waiting for pod pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6 to disappear
Feb 2 15:19:05.532: INFO: Pod pod-configmaps-aae5a3f3-b0c7-43ed-8bce-7594ee3244c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:05.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1549" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":313,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:19:01.790: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-75d278f9-2316-4030-8751-89d6df793698
STEP: Creating a pod to test consume secrets
Feb 2 15:19:01.840: INFO: Waiting up to 5m0s for pod "pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a" in namespace "secrets-886" to be "Succeeded or Failed"
Feb 2 15:19:01.844: INFO: Pod "pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348336ms
Feb 2 15:19:03.849: INFO: Pod "pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009202194s
Feb 2 15:19:05.855: INFO: Pod "pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015062396s
STEP: Saw pod success
Feb 2 15:19:05.855: INFO: Pod "pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a" satisfied condition "Succeeded or Failed"
Feb 2 15:19:05.858: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a container secret-volume-test: <nil>
STEP: delete the pod
Feb 2 15:19:05.875: INFO: Waiting for pod pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a to disappear
Feb 2 15:19:05.879: INFO: Pod pod-secrets-9321d1fd-b683-4354-b807-be27d95bc54a no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:05.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-886" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":961,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:19:05.901: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Feb 2 15:19:05.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9569 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Feb 2 15:19:06.049: INFO: stderr: ""
Feb 2 15:19:06.049: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Feb 2 15:19:06.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9569 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server'
Feb 2 15:19:07.572: INFO: stderr: ""
Feb 2 15:19:07.572: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Feb 2 15:19:07.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9569 delete pods e2e-test-httpd-pod'
Feb 2 15:19:10.097: INFO: stderr: ""
Feb 2 15:19:10.097: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:19:10.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9569" for this suite.
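As context for the server-side dry-run steps above: `kubectl patch --dry-run=server` asks the apiserver to run admission and validation without persisting the change, which is why the follow-up verification still sees the httpd image. A minimal client-go sketch of the same patch, assuming the namespace, pod name, image, and kubeconfig path shown in the log (illustrative only, not code from this suite):

```go
// Illustrative client-go equivalent of the `kubectl patch --dry-run=server`
// call above. DryRun=All runs admission and validation server-side but
// persists nothing, so the stored pod keeps its original image.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	patch := []byte(`{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}}`)
	result, err := cs.CoreV1().Pods("kubectl-9569").Patch(context.TODO(),
		"e2e-test-httpd-pod", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		panic(err)
	}
	// result is the server-computed outcome; a fresh GET would still show
	// the httpd image, which is exactly what the test verifies.
	fmt.Println(result.Spec.Containers[0].Image)
}
```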
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":47,"skipped":964,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:19:10.112: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Feb 2 15:19:10.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee" in namespace "downward-api-8055" to be "Succeeded or Failed" Feb 2 15:19:10.184: INFO: Pod "downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053329ms Feb 2 15:19:12.188: INFO: Pod "downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009895272s Feb 2 15:19:14.193: INFO: Pod "downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015176315s �[1mSTEP�[0m: Saw pod success Feb 2 15:19:14.193: INFO: Pod "downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee" satisfied condition "Succeeded or Failed" Feb 2 15:19:14.196: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee container client-container: <nil> �[1mSTEP�[0m: delete the pod Feb 2 15:19:14.211: INFO: Waiting for pod downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee to disappear Feb 2 15:19:14.214: INFO: Pod downwardapi-volume-9c6b4850-e390-4cd9-bae0-6f054de64dee no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 2 15:19:14.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-8055" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":964,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Feb 2 15:19:05.616: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Feb 2 15:19:06.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 15:19:08.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.February, 2, 15, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Feb 2 15:19:11.150: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Feb 2 15:19:11.154: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the custom resource webhook via the AdmissionRegistration API Feb 2 15:19:21.684: INFO: Waiting for webhook configuration to be ready... Feb 2 15:19:31.796: INFO: Waiting for webhook configuration to be ready... Feb 2 15:19:41.898: INFO: Waiting for webhook configuration to be ready... Feb 2 15:19:51.997: INFO: Waiting for webhook configuration to be ready... Feb 2 15:20:02.009: INFO: Waiting for webhook configuration to be ready... 
Feb 2 15:20:02.010: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002ac220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc00021f600, {0xc002e18160, 0xb}, 0xc0027c6d20, 0xc00230ad40, 0x0?)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x845
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008de1a0, 0x72ecb90)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:20:02.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-585" for this suite.
STEP: Destroying namespace "webhook-585-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [57.009 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Feb 2 15:20:02.010: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002ac220>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":11,"skipped":352,"failed":4,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
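The step that times out above is the helper named in the stack trace, registerWebhookForCustomResource (webhook.go:1727), which creates a ValidatingWebhookConfiguration pointing at the e2e-test-webhook service and then polls until the webhook answers. A hedged sketch of that registration with client-go; the group, resource pattern, and service path here are illustrative, not taken from this log:

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log's ">>> kubeConfig" lines.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // illustrative service path

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-example"}, // illustrative
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"stable.example.com"}, // illustrative CRD group
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-*"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-585", // namespace from the log, for illustration
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: nil, // the real test injects its generated CA here
			},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

Repeated "Waiting for webhook configuration to be ready..." lines like those above typically indicate that the readiness marker request never got an answer from the webhook service, for example because its endpoints were unreachable from the API server.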
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:20:02.627: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:20:03.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:20:06.350: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:20:06.355: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Feb 2 15:20:16.885: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:20:26.999: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:20:37.099: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:20:47.205: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:20:57.225: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:20:57.226: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002ac220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc00021f600, {0xc0007ff300, 0xc}, 0xc004aa0780, 0xc00253a320, 0x0?)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x845
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008de1a0, 0x72ecb90)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:20:57.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8636" for this suite.
STEP: Destroying namespace "webhook-8636-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [55.256 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Feb 2 15:20:57.227: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002ac220>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:17:43.946: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-54e8c79d-5f09-4b19-9a96-17fe50b792b5 in namespace container-probe-1950
Feb 2 15:17:46.003: INFO: Started pod liveness-54e8c79d-5f09-4b19-9a96-17fe50b792b5 in namespace container-probe-1950
STEP: checking the pod's current state and verifying that restartCount is present
Feb 2 15:17:46.007: INFO: Initial restart count of pod liveness-54e8c79d-5f09-4b19-9a96-17fe50b792b5 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1950" for this suite.

• [SLOW TEST:242.792 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1562,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
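The probe test above asserts that the restart count stays at 0 over the observation window. A sketch of the kind of TCP liveness probe it attaches; the image, args, and timing values are illustrative (and note the wrapper field is named Handler rather than ProbeHandler in pre-1.23 versions of k8s.io/api):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative
				Args:  []string{"netexec", "--http-port=8080"},        // serves on tcp:8080
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// The probe succeeds as long as tcp:8080 accepts connections,
						// so the container is never restarted.
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15, // illustrative timings
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```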
SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:46.756: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Feb 2 15:21:46.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89" in namespace "downward-api-9668" to be "Succeeded or Failed"
Feb 2 15:21:46.800: INFO: Pod "downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214439ms
Feb 2 15:21:48.806: INFO: Pod "downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01038829s
Feb 2 15:21:50.811: INFO: Pod "downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014960947s
STEP: Saw pod success
Feb 2 15:21:50.811: INFO: Pod "downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89" satisfied condition "Succeeded or Failed"
Feb 2 15:21:50.814: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89 container client-container: <nil>
STEP: delete the pod
Feb 2 15:21:50.839: INFO: Waiting for pod downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89 to disappear
Feb 2 15:21:50.843: INFO: Pod downwardapi-volume-4b2518fe-27d3-44e8-bad5-998e2bf29e89 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:50.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9668" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1567,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
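The [mode on item file] variant differs from the earlier downward API test by pinning an explicit file mode on a single volume item. A small sketch; the path, field, and mode value are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // illustrative per-item file mode the test would assert
	item := corev1.DownwardAPIVolumeFile{
		Path: "podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		Mode: &mode, // overrides the volume-wide default mode for this item only
	}
	fmt.Printf("%s -> mode %o\n", item.Path, *item.Mode)
}
```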
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:50.907: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Feb 2 15:21:50.955: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Feb 2 15:21:50.960: INFO: starting watch
STEP: patching
STEP: updating
Feb 2 15:21:50.980: INFO: waiting for watch events with expected annotations
Feb 2 15:21:50.981: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:51.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5379" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":74,"skipped":1596,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
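The CronJob API test walks create/get/list/watch/patch/update plus the /status subresource. A sketch of the create-and-patch portion with client-go; the kubeconfig path is taken from the log, while the namespace, name, and spec are illustrative:

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // the e2e suite uses a generated namespace instead

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cronjob"}, // illustrative
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox", // illustrative
								Command: []string{"sleep", "1"},
							}},
						},
					},
				},
			},
		},
	}

	created, err := client.BatchV1().CronJobs(ns).Create(context.TODO(), cj, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The "patching" step: merge-patch an annotation onto the object.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := client.BatchV1().CronJobs(ns).Patch(context.TODO(),
		created.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```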
SSSSSSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":11,"skipped":352,"failed":5,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:20:57.888: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 2 15:20:58.668: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 2 15:21:01.711: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:21:01.718: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Feb 2 15:21:12.257: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:21:22.370: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:21:32.477: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:21:42.572: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:21:52.585: INFO: Waiting for webhook configuration to be ready...
Feb 2 15:21:52.586: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002ac220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc00021f600, {0xc002255d60, 0xc}, 0xc0020c0d70, 0xc0033dd3a0, 0x0?)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x845
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008de1a0, 0x72ecb90)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:53.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4100" for this suite.
STEP: Destroying namespace "webhook-4100-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [55.296 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Feb 2 15:21:52.586: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002ac220>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":11,"skipped":352,"failed":6,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:51.053: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-84d1c482-cb71-4acd-9d17-090103986251
STEP: Creating a pod to test consume configMaps
Feb 2 15:21:51.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9" in namespace "projected-5483" to be "Succeeded or Failed"
Feb 2 15:21:51.102: INFO: Pod "pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.688921ms
Feb 2 15:21:53.108: INFO: Pod "pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009520196s
Feb 2 15:21:55.113: INFO: Pod "pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015183986s
STEP: Saw pod success
Feb 2 15:21:55.113: INFO: Pod "pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9" satisfied condition "Succeeded or Failed"
Feb 2 15:21:55.117: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6 pod pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:21:55.140: INFO: Waiting for pod pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9 to disappear
Feb 2 15:21:55.144: INFO: Pod pod-projected-configmaps-acfaa494-0fa0-4854-9e5c-d75ff57f2dc9 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:55.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5483" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1603,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
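The [defaultMode set] variant puts the mode on the projected volume as a whole rather than on individual items. A sketch of that volume shape; the configMap name is shortened from the log's generated name and the mode value is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // illustrative volume-wide default mode
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode, // applies to every projected file
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume", // illustrative
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
```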
SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:53.315: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override command
Feb 2 15:21:53.363: INFO: Waiting up to 5m0s for pod "client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7" in namespace "containers-8689" to be "Succeeded or Failed"
Feb 2 15:21:53.368: INFO: Pod "client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.484825ms
Feb 2 15:21:55.374: INFO: Pod "client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011177977s
Feb 2 15:21:57.379: INFO: Pod "client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01646547s
STEP: Saw pod success
Feb 2 15:21:57.380: INFO: Pod "client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7" satisfied condition "Succeeded or Failed"
Feb 2 15:21:57.385: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7 container agnhost-container: <nil>
STEP: delete the pod
Feb 2 15:21:57.411: INFO: Waiting for pod client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7 to disappear
Feb 2 15:21:57.414: INFO: Pod client-containers-f95dadd4-df0d-428d-8445-b6bb30cd70d7 no longer exists
[AfterEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:57.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8689" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":401,"failed":6,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
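In pod-spec terms, "override the image's default command (docker entrypoint)" means setting the container's Command field, which replaces the image ENTRYPOINT (Args would replace CMD). A minimal sketch with illustrative image and values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "agnhost-container",
		Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative
		// Command replaces the image's ENTRYPOINT entirely;
		// leaving Command empty and setting Args would replace only CMD.
		Command: []string{"/agnhost", "entrypoint-tester", "override", "arguments"},
	}
	fmt.Println(c.Command)
}
```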
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:55.160: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:21:55.226: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba" in namespace "security-context-test-6216" to be "Succeeded or Failed"
Feb 2 15:21:55.231: INFO: Pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.940813ms
Feb 2 15:21:57.236: INFO: Pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009897563s
Feb 2 15:21:59.241: INFO: Pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015452838s
Feb 2 15:21:59.241: INFO: Pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba" satisfied condition "Succeeded or Failed"
Feb 2 15:21:59.248: INFO: Got logs for pod "busybox-privileged-false-5d51e540-2ccb-4dad-bb63-8615d10727ba": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:21:59.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6216" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1605,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
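The "ip: RTNETLINK answers: Operation not permitted" log line is the expected outcome: with Privileged set to false, the container is denied netlink operations. A sketch of that container shape; the image and exact command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := false
	c := corev1.Container{
		Name:  "busybox-privileged-false",
		Image: "busybox", // illustrative
		// With Privileged=false this netlink operation fails with
		// "Operation not permitted", which is what the test asserts on.
		Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
		SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
	}
	fmt.Println(*c.SecurityContext.Privileged)
}
```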
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:57.511: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 2 15:21:57.549: INFO: Waiting up to 5m0s for pod "pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4" in namespace "emptydir-7712" to be "Succeeded or Failed"
Feb 2 15:21:57.553: INFO: Pod "pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.757914ms
Feb 2 15:21:59.558: INFO: Pod "pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008764791s
Feb 2 15:22:01.563: INFO: Pod "pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013793091s
STEP: Saw pod success
Feb 2 15:22:01.563: INFO: Pod "pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4" satisfied condition "Succeeded or Failed"
Feb 2 15:22:01.566: INFO: Trying to get logs from node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9 pod pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4 container test-container: <nil>
STEP: delete the pod
Feb 2 15:22:01.588: INFO: Waiting for pod pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4 to disappear
Feb 2 15:22:01.591: INFO: Pod pod-7b3f7559-9dbb-4846-bd4b-2912a4e550f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:22:01.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7712" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":438,"failed":6,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
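The (non-root,0777,tmpfs) variant means a memory-medium emptyDir written by a non-root UID with mode 0777. A sketch of the volume piece; the permission and UID checks happen in the pod's command, omitted here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				// StorageMediumMemory backs the emptyDir with tmpfs
				// instead of node-local disk.
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
	fmt.Println(vol.Name)
}
```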
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:21:59.290: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Feb 2 15:21:59.314: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the sample API server.
Feb 2 15:21:59.657: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 2 15:22:01.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.February, 2, 15, 21, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 21, 59, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.February, 2, 15, 21, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.February, 2, 15, 21, 59, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 15:22:03.914: INFO: Waited 124.813775ms for the sample-apiserver to be ready to handle requests.
STEP: Read Status for v1alpha1.wardle.example.com
STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
STEP: List APIServices
Feb 2 15:22:03.997: INFO: Found v1alpha1.wardle.example.com in APIServiceList
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:22:04.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4316" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":77,"skipped":1626,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
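Registering the sample API server culminates in an APIService object like the one patched above. A hedged sketch using the kube-aggregator types; the service name and both priority values are illustrative (the log only shows versionPriority being patched to 400), and only the APIService name and group/version come from the log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	svc := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-4316", // namespace from the log, for illustration
				Name:      "sample-api",      // illustrative service name
			},
			GroupPriorityMinimum: 2000, // illustrative
			VersionPriority:      200,  // illustrative; the test later patches this to 400
			CABundle:             nil,  // the real test injects its generated CA
		},
	}
	fmt.Println(svc.Name)
}
```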
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:22:04.524: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Feb 2 15:22:04.545: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 2 15:22:09.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6275" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":78,"skipped":1657,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
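A RestartNever pod whose init container always fails must end up Failed without the app container ever starting, which is what this test checks. A sketch of that pod shape, with illustrative image and names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"}, // illustrative
		Spec: corev1.PodSpec{
			// RestartNever: the failed init container is not retried,
			// so the pod transitions to Failed.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox", // illustrative
				Command: []string{"/bin/false"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // must never start
			}},
		},
	}
	fmt.Println(pod.Name)
}
```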
SSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 2 15:22:01.633: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Feb 2 15:22:01.668: INFO: Creating deployment "webserver-deployment"
Feb 2 15:22:01.674: INFO: Waiting for observed generation 1
Feb 2 15:22:03.684: INFO: Waiting for all required pods to come up
Feb 2 15:22:03.689: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 2 15:22:05.701: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 2 15:22:05.709: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 2 15:22:05.720: INFO: Updating deployment webserver-deployment
Feb 2 15:22:05.720: INFO: Waiting for observed generation 2
Feb 2 15:22:07.729: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 2 15:22:07.732: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 2 15:22:07.736: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 2 15:22:07.747: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 2 15:22:07.747: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 2 15:22:07.750: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 2 15:22:07.756: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 2 15:22:07.756: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 2 15:22:07.766: INFO: Updating deployment webserver-deployment
Feb 2 15:22:07.766: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 2 15:22:07.774: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 2 15:22:07.778: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Feb 2 15:22:09.806: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3209 88cdcf6a-0ab4-4434-9b9c-534a27bb8eac 13872 3 2023-02-02 15:22:01 +0000 UTC <nil> <nil> map[name:httpd]
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-02-02 15:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0011ca4e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:10,UnavailableReplicas:23,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-02-02 15:22:07 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-02-02 15:22:09 +0000 UTC,LastTransitionTime:2023-02-02 15:22:01 +0000 UTC,},},ReadyReplicas:10,CollisionCount:nil,},} Feb 2 15:22:09.823: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-3209 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 13745 3 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 88cdcf6a-0ab4-4434-9b9c-534a27bb8eac 0xc00063a5f7 0xc00063a5f8}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88cdcf6a-0ab4-4434-9b9c-534a27bb8eac\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00063a6b8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:22:09.824: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 2 15:22:09.824: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-3209 e65239a8-adf2-4ac3-9539-b9d2f60811d8 13871 3 2023-02-02 15:22:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 88cdcf6a-0ab4-4434-9b9c-534a27bb8eac 0xc00063a737 0xc00063a738}] [] [{kube-controller-manager Update apps/v1 2023-02-02 15:22:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"88cdcf6a-0ab4-4434-9b9c-534a27bb8eac\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-02-02 15:22:03 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00063a7d8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:11,AvailableReplicas:11,Conditions:[]ReplicaSetCondition{},},} Feb 2 15:22:09.835: INFO: Pod "webserver-deployment-566f96c878-2tvpm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-2tvpm webserver-deployment-566f96c878- deployment-3209 fb87f090-bdc9-41d2-b0b1-169c493b80df 13768 0 2023-02-02 15:22:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7ade7 0xc003d7ade8}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zvbwn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvbwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-cnnqas,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effec
t:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-02-02 15:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.837: INFO: Pod "webserver-deployment-566f96c878-8ftrr" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8ftrr webserver-deployment-566f96c878- deployment-3209 c236810f-e6bb-486f-b63a-259411a6a8fe 13840 0 2023-02-02 15:22:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7afc0 0xc003d7afc1}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4vgxm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vgxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-cnnqas,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-02-02 15:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.838: INFO: Pod "webserver-deployment-566f96c878-8g4hg" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8g4hg webserver-deployment-566f96c878- deployment-3209 04eaa230-6e80-42e8-a4ec-ef1a096ffee8 13667 0 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7b1a0 0xc003d7b1a1}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.111\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dr4ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dr4ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.111,StartTime:2023-02-02 15:22:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.838: INFO: Pod "webserver-deployment-566f96c878-ccqms" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ccqms webserver-deployment-566f96c878- deployment-3209 e15f3211-9ce3-4b8d-87fe-aa23336aad2e 13636 0 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7b3a0 0xc003d7b3a1}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkdtk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkdtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-qtqh6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-02-02 15:22:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.838: INFO: Pod "webserver-deployment-566f96c878-cxrmd" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-cxrmd webserver-deployment-566f96c878- deployment-3209 69621dd2-ccb1-4a6e-8205-159fbdf0fc5c 13869 0 2023-02-02 15:22:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7b570 0xc003d7b571}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p8p9b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8p9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.73,StartTime:2023-02-02 15:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.839: INFO: Pod "webserver-deployment-566f96c878-kkvlz" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-kkvlz webserver-deployment-566f96c878- deployment-3209 96c34a45-6e8f-436f-b800-b69b1683454f 13758 0 2023-02-02 15:22:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7b770 0xc003d7b771}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-smv47,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smv47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-02-02 15:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.839: INFO: Pod "webserver-deployment-566f96c878-rcdmx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-rcdmx webserver-deployment-566f96c878- deployment-3209 3088a9e6-cdf4-4904-98c1-0d121e9dd39b 13770 0 2023-02-02 15:22:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7b940 0xc003d7b941}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wxhzh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxhzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-02-02 15:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.839: INFO: Pod "webserver-deployment-566f96c878-s7bhv" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-s7bhv webserver-deployment-566f96c878- deployment-3209 5dc00f8d-64d7-4aec-b92b-0941d7115bcc 13675 0 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7bb10 0xc003d7bb11}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6tzs6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tzs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.71,StartTime:2023-02-02 15:22:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.840: INFO: Pod "webserver-deployment-566f96c878-sl67t" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-sl67t webserver-deployment-566f96c878- deployment-3209 88b6f327-7b4f-41d7-8b51-52e185cd6aec 13661 0 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7bd10 0xc003d7bd11}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9sbwg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sbwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-qt17ut-worker-cnnqas,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-02-02 15:22:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.66,StartTime:2023-02-02 15:22:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 15:22:09.840: INFO: Pod "webserver-deployment-566f96c878-slkm6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-slkm6 webserver-deployment-566f96c878- deployment-3209 bed537b7-2e3f-4b00-9aa1-8cfa1d9fced8 13678 0 2023-02-02 15:22:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb 0xc003d7bf20 0xc003d7bf21}] [] [{kube-controller-manager Update v1 2023-02-02 15:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-02-02 15:22:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
Spec (condensed from the full struct dump): single container httpd, image webserver:404, imagePullPolicy IfNotPresent, no probes or resource requests; projected service-account volume kube-api-access-79lmf (token, kube-root-ca.crt, downward-API namespace); serviceAccountName default; restartPolicy Always; terminationGracePeriodSeconds 0; dnsPolicy ClusterFirst; scheduled by default-scheduler onto node k8s-upgrade-and-conformance-qt17ut-worker-t1dfk9; standard not-ready/unreachable NoExecute tolerations (300s).
Status: Phase=Pending; conditions (2023-02-02 15:22:05 UTC): Initialized=True, Ready=False and ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), PodScheduled=True; hostIP 172.18.0.6, podIP 192.168.2.70; container httpd Waiting, Reason=ErrImagePull, Message: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed; restartCount 0; QoS BestEffort.
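The ErrImagePull above is most likely intentional: the Kubernetes deployment e2e tests deliberately roll the Deployment to the nonexistent image webserver:404 (for example in the proportional-scaling case), so unready pods are the state under test rather than the failure itself. For triaging dumps like this one, here is a minimal client-go sketch that lists containers stuck on image pulls; the namespace literal mirrors the log, and everything else is an assumption, not the e2e framework's own code:

```go
// imagepull_waiting.go: print containers stuck waiting on image pulls,
// instead of dumping the entire Pod struct. Sketch only; assumes a
// kubeconfig-backed clientset and the "deployment-3209" namespace from
// the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("deployment-3209").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// Waiting is nil once the container is running or terminated.
			if w := st.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s container %s: %s: %s\n", p.Namespace, p.Name, st.Name, w.Reason, w.Message)
			}
		}
	}
}
```

Run with a kubeconfig pointing at the workload cluster; against this namespace it would print the single ErrImagePull line rather than the multi-kilobyte struct dump.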
Feb 2 15:22:09.841: INFO: Pod "webserver-deployment-566f96c878-tkmvm" is not available: &Pod{...} (condensed): namespace deployment-3209, uid 0a69ff36-ea91-4c39-be2e-5a525cb728c9, resourceVersion 13753, created 2023-02-02 15:22:07 UTC, labels name=httpd, pod-template-hash=566f96c878, owned by ReplicaSet webserver-deployment-566f96c878 (uid 10bc4c47-c0e3-4c66-bac7-0ec0d9ee42eb).
Spec: identical to the pod above except for the projected volume name (kube-api-access-ds44l); scheduled onto node k8s-upgrade-and-conformance-qt17ut-md-0-mpnkg-57cf48b87c-jj5sm.
Status: Phase=Pending; conditions (2023-02-02 15:22:07 UTC): Initialized=True, Ready=False and ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), PodScheduled=True; hostIP 172.18.0.7, no podIP yet; container httpd Waiting, Reason=ContainerCreating; restartCount 0; QoS BestEffort.
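The "is not available" wording tracks the Deployment controller's availability rule: a pod counts as available only once its Ready condition has been True for at least minReadySeconds. A self-contained sketch of that rule, re-derived from the corev1 types rather than taken from the controller or the e2e framework:

```go
// podavailable.go: the check behind the "Pod ... is not available" lines,
// reimplemented as a small sketch.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podReadyCondition returns the pod's Ready condition, if present.
func podReadyCondition(pod *corev1.Pod) *corev1.PodCondition {
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == corev1.PodReady {
			return &pod.Status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable mirrors the availability rule: Ready must be True and,
// if minReadySeconds > 0, must have been True for at least that long.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := podReadyCondition(pod)
	if c == nil || c.Status != corev1.ConditionTrue {
		return false // e.g. Ready=False with Reason=ContainersNotReady, as above
	}
	if minReadySeconds == 0 {
		return true
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	pod := &corev1.Pod{} // no Ready condition at all, so not available
	fmt.Println(isPodAvailable(pod, 10, metav1.Now()))
}
```

For the tkmvm pod above, Ready=False short-circuits the check immediately; for a pod that has just turned Ready, availability still waits out minReadySeconds.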
Feb 2 15:22:09.841: INFO: Pod "webserver-deployment-566f96c878-v4m2z" is not available: &Pod{...} (condensed): namespace deployment-3209, uid 490e453f-9a72-4a85-90c7-54f1ac9528bc, resourceVersion 13749, created 2023-02-02 15:22:07 UTC, same labels and ReplicaSet owner as above.
Spec: identical again except for the projected volume name (kube-api-access-ksd5m); scheduled onto node k8s-upgrade-and-conformance-qt17ut-worker-cnnqas.
Status: Phase=Pending; conditions (2023-02-02 15:22:07 UTC): Initialized=True, Ready=False and ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), PodScheduled=True; the remainder of this dump is cut off in the captured log.
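Across all three dumps the condition pattern is identical: PodScheduled=True and Initialized=True, while Ready and ContainersReady stay False with Reason=ContainersNotReady until the httpd container actually starts. That one line is the whole signal buried in each dump; a hypothetical helper that condenses a pod's conditions accordingly:

```go
// conditions.go: summarize a pod's conditions in one line. Sketch only;
// uses corev1 types directly, populated here with the values seen in the
// v4m2z dump above.
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// conditionSummary renders each condition as Type=Status(Reason).
func conditionSummary(pod *corev1.Pod) string {
	parts := make([]string, 0, len(pod.Status.Conditions))
	for _, c := range pod.Status.Conditions {
		s := fmt.Sprintf("%s=%s", c.Type, c.Status)
		if c.Reason != "" {
			s += "(" + c.Reason + ")"
		}
		parts = append(parts, s)
	}
	return strings.Join(parts, " ")
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
				{Type: corev1.PodInitialized, Status: corev1.ConditionTrue},
				{Type: corev1.ContainersReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(conditionSummary(pod))
	// Prints: PodScheduled=True Initialized=True
	//         ContainersReady=False(ContainersNotReady) Ready=False(ContainersNotReady)
}
```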