Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
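The `--ginkgo.focus` value above is an anchored regular expression in which spaces and regex metacharacters (`[`, `]`, `-`) are escaped so that exactly one spec is selected. A quick way to sanity-check that a focus pattern of this shape matches the intended spec name (a hedged sketch using Python's `re`; the real harness matches in Go, but the pattern syntax is compatible here):

```python
import re

# Focus pattern copied from the command line above.
focus = (r"capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass"
         r"\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]"
         r"\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$")

# The full spec name the pattern is meant to select.
spec = ("capi-e2e When upgrading a workload cluster using ClusterClass "
        "and testing K8S conformance [Conformance] [K8s-Upgrade] "
        "Should create and upgrade a workload cluster and run kubetest")

print(bool(re.search(focus, spec)))  # True
```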
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000b2e888>: {
        error: <*errors.withMessage | 0xc000804400>{
            cause: <*errors.errorString | 0xc000a2c8d0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a97f78, 0x1adc389, 0x7b9691, 0x7b9085, 0x7b875b, 0x7be4c9, 0x7bdeb2, 0x7def91, 0x7decb6, 0x7de305, 0x7e0745, 0x7ec929, 0x7ec73e, 0x1af7c92, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
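The exit code in the failure above can be decoded mechanically: by POSIX convention, shells and container runtimes report codes above 128 as 128 plus the number of the signal that terminated the process. A minimal sketch (not part of the test suite):

```python
import signal

exit_code = 137  # reported by the conformance container
sig = signal.Signals(exit_code - 128)
print(sig.name)  # SIGKILL
```

137 therefore means the container was SIGKILLed; in a Kubernetes context that most commonly indicates the kernel OOM killer or a hard eviction, rather than a test assertion failing.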
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-i7zwun
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-i7zwun"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-pw1vby" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-pw1vby --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-pw1vby-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-pw1vby-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-i7zwun/k8s-upgrade-and-conformance-pw1vby-8nwgl to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-i7zwun/k8s-upgrade-and-conformance-pw1vby-8nwgl to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.16
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-i7zwun/k8s-upgrade-and-conformance-pw1vby-md-0-f7x96 to be upgraded to v1.23.16
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.16
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-i7zwun/k8s-upgrade-and-conformance-pw1vby-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-i7zwun/k8s-upgrade-and-conformance-pw1vby-mp-0 to be upgraded from v1.22.17 to v1.23.16
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.16
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1675004484 - Will randomize all specs
Will run 7052 specs
Running in parallel across 4 nodes
Jan 29 15:01:30.564: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:01:30.568: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 29 15:01:30.596: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 29 15:01:30.644: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:30.644: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 29 15:01:30.644: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 29 15:01:30.644: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 29 15:01:30.644: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }]
Jan 29 15:01:30.644: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }]
Jan 29 15:01:30.644: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:30.644: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }]
Jan 29 15:01:30.644:
INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:30.644: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:30.645: INFO: Jan 29 15:01:32.687: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:32.687: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Jan 29 15:01:32.687: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are 
Running and Ready. Jan 29 15:01:32.687: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 15:01:32.687: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }] Jan 29 15:01:32.687: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }] Jan 29 15:01:32.687: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:32.687: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }] Jan 29 15:01:32.687: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:32.687: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:32.687: INFO: Jan 29 15:01:34.684: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.684: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.684: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.684: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.685: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.685: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:34.685: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Jan 29 15:01:34.685: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 29 15:01:34.685: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 15:01:34.685: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }] Jan 29 15:01:34.685: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }] Jan 29 15:01:34.685: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:34.685: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }] Jan 29 15:01:34.685: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 
15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:34.685: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:34.685: INFO: Jan 29 15:01:36.680: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:36.680: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Jan 29 15:01:36.680: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 29 15:01:36.680: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 15:01:36.680: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }] Jan 29 15:01:36.680: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }] Jan 29 15:01:36.680: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:36.680: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }] Jan 29 15:01:36.680: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 
15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:36.680: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:36.680: INFO: Jan 29 15:01:38.688: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.688: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.688: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.689: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.689: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.689: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:38.689: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Jan 29 15:01:38.689: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 29 15:01:38.689: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 15:01:38.689: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }] Jan 29 15:01:38.689: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }] Jan 29 15:01:38.689: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:38.689: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }] Jan 29 15:01:38.689: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 
15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:38.689: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:38.689: INFO: Jan 29 15:01:40.683: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:40.683: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Jan 29 15:01:40.683: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 29 15:01:40.683: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 15:01:40.683: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }] Jan 29 15:01:40.683: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }] Jan 29 15:01:40.683: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:40.683: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }] Jan 29 15:01:40.683: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 
15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }] Jan 29 15:01:40.684: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }] Jan 29 15:01:40.684: INFO: Jan 29 15:01:42.685: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 29 15:01:42.685: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Jan 29 15:01:42.685: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 29 15:01:42.685: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 15:01:42.685: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }]
Jan 29 15:01:42.685: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }]
Jan 29 15:01:42.685: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:42.685: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }]
Jan 29 15:01:42.685: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:42.685: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }]
Jan 29 15:01:42.685: INFO:
Jan 29 15:01:44.686: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:44.687: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
Jan 29 15:01:44.687: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 29 15:01:44.687: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 15:01:44.687: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }]
Jan 29 15:01:44.687: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }]
Jan 29 15:01:44.687: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:44.687: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }]
Jan 29 15:01:44.687: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:44.687: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }]
Jan 29 15:01:44.687: INFO:
Jan 29 15:01:46.677: INFO: The status of Pod coredns-bd6b6df9f-2x422 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: The status of Pod kindnet-7sfgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: The status of Pod kindnet-sm7fl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: The status of Pod kube-proxy-2r5xw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: The status of Pod kube-proxy-jshk5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:46.678: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
Jan 29 15:01:46.678: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 29 15:01:46.678: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 15:01:46.678: INFO: coredns-bd6b6df9f-2x422 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC }]
Jan 29 15:01:46.678: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }]
Jan 29 15:01:46.678: INFO: kindnet-7sfgk k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:46.678: INFO: kindnet-sm7fl k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:29 +0000 UTC }]
Jan 29 15:01:46.678: INFO: kube-proxy-2r5xw k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:56:37 +0000 UTC }]
Jan 29 15:01:46.678: INFO: kube-proxy-jshk5 k8s-upgrade-and-conformance-pw1vby-worker-gcr2ol Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:59:29 +0000 UTC }]
Jan 29 15:01:46.678: INFO:
Jan 29 15:01:48.695: INFO: The status of Pod coredns-bd6b6df9f-srfwd is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:48.695: INFO: The status of Pod coredns-bd6b6df9f-vmgbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:01:48.695: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
Jan 29 15:01:48.695: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 29 15:01:48.695: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 15:01:48.695: INFO: coredns-bd6b6df9f-srfwd k8s-upgrade-and-conformance-pw1vby-worker-biy623 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:01:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:01:48 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:01:48 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:01:48 +0000 UTC }]
Jan 29 15:01:48.695: INFO: coredns-bd6b6df9f-vmgbx k8s-upgrade-and-conformance-pw1vby-worker-hbuo2r Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:00:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 14:58:40 +0000 UTC }]
Jan 29 15:01:48.695: INFO:
Jan 29 15:01:50.681: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
Jan 29 15:01:50.681: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 29 15:01:50.681: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 29 15:01:50.687: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 29 15:01:50.687: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 29 15:01:50.687: INFO: e2e test version: v1.23.16
Jan 29 15:01:50.691: INFO: kube-apiserver version: v1.23.16
Jan 29 15:01:50.691: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:01:50.697: INFO: Cluster IP family: ipv4
------------------------------
Jan 29 15:01:50.720: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:01:50.744: INFO: Cluster IP family: ipv4
------------------------------
Jan 29 15:01:50.738: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:01:50.764: INFO: Cluster IP family: ipv4
Jan 29 15:01:50.737: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:01:50.765: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:01:50.810: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
W0129 15:01:50.858956 15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 29 15:01:50.860: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 29 15:01:50.909: INFO: Waiting up to 5m0s for pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b" in namespace "downward-api-97" to be "Succeeded or Failed"
Jan 29 15:01:50.922: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.271747ms
Jan 29 15:01:52.929: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020527863s
Jan 29 15:01:54.937: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028379688s
Jan 29 15:01:56.943: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034250201s
Jan 29 15:01:58.949: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040299678s
STEP: Saw pod success
Jan 29 15:01:58.949: INFO: Pod "downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b" satisfied condition "Succeeded or Failed"
Jan 29 15:01:58.953: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b container dapi-container: <nil>
STEP: delete the pod
Jan 29 15:01:58.994: INFO: Waiting for pod downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b to disappear
Jan 29 15:01:58.999: INFO: Pod downward-api-0285ce2f-1eec-4778-8c2b-06f8c331724b no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:01:58.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-97" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:01:50.832: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
W0129 15:01:50.871887 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 29 15:01:50.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 29 15:01:50.929: INFO: Waiting up to 5m0s for pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962" in namespace "emptydir-7709" to be "Succeeded or Failed"
Jan 29 15:01:50.941: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Pending", Reason="", readiness=false. Elapsed: 11.796259ms
Jan 29 15:01:52.947: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017505171s
Jan 29 15:01:54.955: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025035475s
Jan 29 15:01:56.961: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031082193s
Jan 29 15:01:58.970: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040313354s
Jan 29 15:02:00.976: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046548792s
STEP: Saw pod success
Jan 29 15:02:00.976: INFO: Pod "pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962" satisfied condition "Succeeded or Failed"
Jan 29 15:02:00.981: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962 container test-container: <nil>
STEP: delete the pod
Jan 29 15:02:01.035: INFO: Waiting for pod pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962 to disappear
Jan 29 15:02:01.040: INFO: Pod pod-e638efa3-fc46-47dd-9ec1-9f00cf7c4962 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:01.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7709" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:01:50.848: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
W0129 15:01:50.903096 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 29 15:01:50.903: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-948c7353-0969-49c3-92ed-d4858b56f710
STEP: Creating a pod to test consume secrets
Jan 29 15:01:50.974: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf" in namespace "projected-8017" to be "Succeeded or Failed"
Jan 29 15:01:50.982: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313589ms
Jan 29 15:01:52.989: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015288947s
Jan 29 15:01:54.997: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02329091s
Jan 29 15:01:57.006: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031964562s
Jan 29 15:01:59.012: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038822083s
Jan 29 15:02:01.018: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044113916s
STEP: Saw pod success
Jan 29 15:02:01.018: INFO: Pod "pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf" satisfied condition "Succeeded or Failed"
Jan 29 15:02:01.023: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527 pod pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 29 15:02:01.071: INFO: Waiting for pod pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf to disappear
Jan 29 15:02:01.076: INFO: Pod pod-projected-secrets-24e42fbc-60bc-4970-9452-c389dc785cbf no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:01.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8017" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:01.146: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test service account token:
Jan 29 15:02:01.214: INFO: Waiting up to 5m0s for pod "test-pod-572ae652-2581-44a3-bed1-12e4610796d6" in namespace "svcaccounts-9386" to be "Succeeded or Failed"
Jan 29 15:02:01.223: INFO: Pod "test-pod-572ae652-2581-44a3-bed1-12e4610796d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.640158ms
Jan 29 15:02:03.231: INFO: Pod "test-pod-572ae652-2581-44a3-bed1-12e4610796d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01602832s
Jan 29 15:02:05.236: INFO: Pod "test-pod-572ae652-2581-44a3-bed1-12e4610796d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021307712s
STEP: Saw pod success
Jan 29 15:02:05.236: INFO: Pod "test-pod-572ae652-2581-44a3-bed1-12e4610796d6" satisfied condition "Succeeded or Failed"
Jan 29 15:02:05.241: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod test-pod-572ae652-2581-44a3-bed1-12e4610796d6 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:02:05.263: INFO: Waiting for pod test-pod-572ae652-2581-44a3-bed1-12e4610796d6 to disappear
Jan 29 15:02:05.266: INFO: Pod test-pod-572ae652-2581-44a3-bed1-12e4610796d6 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9386" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:01:59.049: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 29 15:01:59.127: INFO: The status of Pod pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:02:01.137: INFO: The status of Pod pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:02:03.134: INFO: The status of Pod pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 29 15:02:03.662: INFO: Successfully updated pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e"
Jan 29 15:02:03.662: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e" in namespace "pods-4768" to be "terminated due to deadline exceeded"
Jan 29 15:02:03.668: INFO: Pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e": Phase="Running", Reason="", readiness=true. Elapsed: 5.809978ms
Jan 29 15:02:05.675: INFO: Pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e": Phase="Running", Reason="", readiness=false. Elapsed: 2.012876632s
Jan 29 15:02:07.681: INFO: Pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.018327911s
Jan 29 15:02:07.681: INFO: Pod "pod-update-activedeadlineseconds-eb704f9a-4230-48f2-9be5-6fdd6999b62e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:07.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4768" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:01.282: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Jan 29 15:02:01.357: INFO: observed Pod pod-test in namespace pods-8759 in phase Pending with labels: map[test-pod-static:true] & conditions []
Jan 29 15:02:01.361: INFO: observed Pod pod-test in namespace pods-8759 in phase Pending with labels:
map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC }]
Jan 29 15:02:01.382: INFO: observed Pod pod-test in namespace pods-8759 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC }]
Jan 29 15:02:06.258: INFO: Found Pod pod-test in namespace pods-8759 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:02:01 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Jan 29 15:02:06.281: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Jan 29 15:02:06.371: INFO: observed event type ADDED
Jan 29 15:02:06.371: INFO: observed event type MODIFIED
Jan 29 15:02:06.372: INFO: observed event type MODIFIED
Jan 29 15:02:06.372: INFO: observed event type MODIFIED
Jan 29 15:02:06.372: INFO: observed event type MODIFIED
Jan 29 15:02:06.372: INFO: observed event type MODIFIED
Jan 29 15:02:06.372: INFO: observed event type MODIFIED
Jan 29 15:02:08.238: INFO: observed event type MODIFIED
Jan 29 15:02:09.263: INFO: observed event type MODIFIED
Jan 29 15:02:09.282: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:09.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8759" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":2,"skipped":87,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:09.418: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should provide secure master service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5895" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":3,"skipped":125,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:09.501: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 29 15:02:09.560: INFO: Waiting up to 5m0s for pod "pod-ece4198b-dbda-4303-a61f-59428c2a8693" in namespace "emptydir-7081" to be "Succeeded or Failed"
Jan 29 15:02:09.568: INFO: Pod "pod-ece4198b-dbda-4303-a61f-59428c2a8693": Phase="Pending", Reason="", readiness=false. Elapsed: 8.706583ms
Jan 29 15:02:11.575: INFO: Pod "pod-ece4198b-dbda-4303-a61f-59428c2a8693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014976499s
Jan 29 15:02:13.584: INFO: Pod "pod-ece4198b-dbda-4303-a61f-59428c2a8693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024192641s
STEP: Saw pod success
Jan 29 15:02:13.584: INFO: Pod "pod-ece4198b-dbda-4303-a61f-59428c2a8693" satisfied condition "Succeeded or Failed"
Jan 29 15:02:13.594: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-ece4198b-dbda-4303-a61f-59428c2a8693 container test-container: <nil>
STEP: delete the pod
Jan 29 15:02:13.628: INFO: Waiting for pod pod-ece4198b-dbda-4303-a61f-59428c2a8693 to disappear
Jan 29 15:02:13.632: INFO: Pod pod-ece4198b-dbda-4303-a61f-59428c2a8693 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:13.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7081" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":129,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:05.425: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6525
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6525
I0129 15:02:05.658239 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-6525, replica count: 2
I0129 15:02:08.710510 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 29 15:02:08.710: INFO: Creating new exec pod
Jan 29 15:02:11.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6525 exec execpodgzbs9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 29 15:02:12.247: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 29 15:02:12.247: INFO: stdout: "externalname-service-nkwzf"
Jan 29 15:02:12.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6525 exec execpodgzbs9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.251.239 80'
Jan 29 15:02:14.525: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.251.239 80\nConnection to 10.133.251.239 80 port [tcp/http] succeeded!\n"
Jan 29 15:02:14.526: INFO: stdout: ""
Jan 29 15:02:15.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6525 exec execpodgzbs9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.251.239 80'
Jan 29 15:02:15.825: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.251.239 80\nConnection to 10.133.251.239 80 port [tcp/http] succeeded!\n"
Jan 29 15:02:15.825: INFO: stdout: "externalname-service-nkwzf"
Jan 29 15:02:15.825: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:15.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6525" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":3,"skipped":78,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:13.676: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-2beebd49-0177-47cd-b3af-0d3277f15bae
STEP: Creating a pod to test consume configMaps
Jan 29 15:02:13.724: INFO: Waiting up to 5m0s for pod "pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef" in namespace "configmap-4539" to be "Succeeded or Failed"
Jan 29 15:02:13.728: INFO: Pod "pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364712ms
Jan 29 15:02:15.736: INFO: Pod "pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011905051s
Jan 29 15:02:17.742: INFO: Pod "pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018171787s
STEP: Saw pod success
Jan 29 15:02:17.742: INFO: Pod "pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef" satisfied condition "Succeeded or Failed"
Jan 29 15:02:17.746: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:02:17.771: INFO: Waiting for pod pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef to disappear
Jan 29 15:02:17.776: INFO: Pod pod-configmaps-26583f12-331d-4ee4-a480-be50dadc65ef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:17.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4539" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":140,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:17.819: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:17.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8259" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":6,"skipped":151,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:01:50.928: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
W0129 15:01:50.996823 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 29 15:01:50.997: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-46nv
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 15:01:51.037: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-46nv" in namespace "subpath-7839" to be "Succeeded or Failed"
Jan 29 15:01:51.042: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650818ms
Jan 29 15:01:53.050: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012642841s
Jan 29 15:01:55.057: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020356129s
Jan 29 15:01:57.064: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02727731s
Jan 29 15:01:59.073: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 8.035557625s
Jan 29 15:02:01.079: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 10.042447238s
Jan 29 15:02:03.086: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 12.049388263s
Jan 29 15:02:05.093: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 14.056019323s
Jan 29 15:02:07.105: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 16.067756381s
Jan 29 15:02:09.121: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 18.08410999s
Jan 29 15:02:11.131: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 20.094083474s
Jan 29 15:02:13.137: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 22.099901074s
Jan 29 15:02:15.145: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 24.107720236s
Jan 29 15:02:17.153: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=true. Elapsed: 26.115596637s
Jan 29 15:02:19.160: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Running", Reason="", readiness=false. Elapsed: 28.122947545s
Jan 29 15:02:21.167: INFO: Pod "pod-subpath-test-downwardapi-46nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.129534833s
STEP: Saw pod success
Jan 29 15:02:21.167: INFO: Pod "pod-subpath-test-downwardapi-46nv" satisfied condition "Succeeded or Failed"
Jan 29 15:02:21.183: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-subpath-test-downwardapi-46nv container test-container-subpath-downwardapi-46nv: <nil>
STEP: delete the pod
Jan 29 15:02:21.257: INFO: Waiting for pod pod-subpath-test-downwardapi-46nv to disappear
Jan 29 15:02:21.268: INFO: Pod pod-subpath-test-downwardapi-46nv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-46nv
Jan 29 15:02:21.268: INFO: Deleting pod "pod-subpath-test-downwardapi-46nv" in namespace "subpath-7839"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7839" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":1,"skipped":38,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:21.374: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:21.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2132" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":56,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:21.690: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Starting the proxy
Jan 29 15:02:21.725: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3429 proxy --unix-socket=/tmp/kubectl-proxy-unix54566889/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:21.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3429" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":3,"skipped":88,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:18.022: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:02:19.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:02:22.130: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:22.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-151" for this suite.
STEP: Destroying namespace "webhook-151-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":7,"skipped":194,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:15.902: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 29 15:02:16.011: INFO: The status of Pod annotationupdate399b76c8-b70f-4e78-9048-5e94a83da839 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:02:18.021: INFO: The status of Pod annotationupdate399b76c8-b70f-4e78-9048-5e94a83da839 is Running (Ready = true)
Jan 29 15:02:18.556: INFO: Successfully updated pod "annotationupdate399b76c8-b70f-4e78-9048-5e94a83da839"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:22.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4565" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":80,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:22.407: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: starting the proxy server
Jan 29 15:02:22.465: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7681 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7681" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":8,"skipped":200,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:21.867: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:02:21.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be" in namespace "downward-api-1582" to be "Succeeded or Failed"
Jan 29 15:02:21.932: INFO: Pod "downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041775ms
Jan 29 15:02:23.939: INFO: Pod "downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012921129s
Jan 29 15:02:25.951: INFO: Pod "downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025052318s
STEP: Saw pod success
Jan 29 15:02:25.951: INFO: Pod "downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be" satisfied condition "Succeeded or Failed"
Jan 29 15:02:25.971: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be container client-container: <nil>
STEP: delete the pod
Jan 29 15:02:26.018: INFO: Waiting for pod downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be to disappear
Jan 29 15:02:26.036: INFO: Pod downwardapi-volume-65520227-690f-4520-ba8a-e53b54c713be no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:26.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1582" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:26.299: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jan 29 15:02:27.549: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true)
Jan 29 15:02:27.748: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:27.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8637" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":5,"skipped":136,"failed":0}
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:22.714: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:02:22.769: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1339
I0129 15:02:22.793309 20 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1339, replica count: 1
I0129 15:02:23.845404 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 15:02:24.845783 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 29 15:02:24.964: INFO: Created: latency-svc-t8vw5
Jan 29 15:02:24.977: INFO: Got endpoints: latency-svc-t8vw5 [31.450332ms]
Jan 29 15:02:25.026: INFO: Created: latency-svc-hhlcr
Jan 29 15:02:25.048: INFO: Got endpoints: latency-svc-hhlcr [69.887663ms]
Jan 29 15:02:25.117: INFO: Created: latency-svc-lw9lp
Jan 29 15:02:25.117: INFO: Got endpoints: latency-svc-lw9lp [137.885889ms]
Jan 29 15:02:25.137: INFO: Created: latency-svc-hl6js
Jan 29 15:02:25.154: INFO: Got endpoints: latency-svc-hl6js [174.100392ms]
Jan 29 15:02:25.259: INFO: Created: latency-svc-m294d
Jan 29 15:02:25.290: INFO: Got endpoints: latency-svc-m294d [311.954248ms]
Jan 29 15:02:25.315: INFO: Created: latency-svc-8vh6j
Jan 29 15:02:25.321: INFO: Got endpoints: latency-svc-8vh6j [339.361434ms]
Jan 29 15:02:25.346: INFO: Created: latency-svc-77qp2
Jan 29 15:02:25.364: INFO: Got endpoints: latency-svc-77qp2 [382.963498ms]
Jan 29 15:02:25.380: INFO: Created: latency-svc-pdfwb
Jan 29 15:02:25.391: INFO: Got endpoints: latency-svc-pdfwb [410.942607ms]
Jan 29 15:02:25.785: INFO: Created: latency-svc-x9snm
Jan 29 15:02:25.797: INFO: Created: latency-svc-8xghl
Jan 29 15:02:25.798: INFO: Created: latency-svc-9dxbg
Jan 29 15:02:25.799: INFO: Created: latency-svc-r6w6p
Jan 29 15:02:25.799: INFO: Created: latency-svc-6srmd
Jan 29 15:02:25.799: INFO: Created: latency-svc-smfsj
Jan 29 15:02:25.800: INFO: Created: latency-svc-xc86n
Jan 29 15:02:25.805: INFO: Created: latency-svc-wb9qc
Jan 29 15:02:25.807: INFO: Created: latency-svc-rgtp8
Jan 29 15:02:25.809: INFO: Created: latency-svc-mr255
Jan 29 15:02:25.812: INFO: Created: latency-svc-6zz4v
Jan 29 15:02:25.818: INFO: Created: latency-svc-66qdj
Jan 29 15:02:25.828: INFO: Created: latency-svc-cn4rp
Jan 29 15:02:25.829: INFO: Created: latency-svc-zr58m
Jan 29 15:02:25.831: INFO: Created: latency-svc-klsq7
Jan 29 15:02:25.849: INFO: Got endpoints: latency-svc-mr255 [870.574291ms]
Jan 29 15:02:25.857: INFO: Got endpoints: latency-svc-x9snm [875.453579ms]
Jan 29 15:02:25.857: INFO: Got endpoints: latency-svc-6zz4v [876.059299ms]
Jan 29 15:02:25.857: INFO: Got endpoints: latency-svc-zr58m [876.864904ms]
Jan 29 15:02:25.861: INFO: Got endpoints: latency-svc-klsq7 [540.531447ms]
Jan 29 15:02:25.868: INFO: Got endpoints: latency-svc-smfsj [820.573147ms]
Jan 29 15:02:25.892: INFO: Created: latency-svc-8k5p2
Jan 29 15:02:25.892: INFO: Got endpoints: latency-svc-6srmd [774.86938ms]
Jan 29 15:02:25.892: INFO: Got endpoints: latency-svc-r6w6p [912.150072ms]
Jan 29 15:02:25.892: INFO: Got endpoints: latency-svc-xc86n [501.052472ms]
Jan 29 15:02:25.902: INFO: Got endpoints: latency-svc-9dxbg [747.90185ms]
Jan 29 15:02:25.910: INFO: Got endpoints: latency-svc-cn4rp [620.02994ms]
Jan 29 15:02:25.910: INFO: Got endpoints: latency-svc-8xghl [933.057356ms]
Jan 29 15:02:25.911: INFO: Got endpoints: latency-svc-wb9qc [546.474331ms]
Jan 29 15:02:25.911: INFO: Got endpoints: latency-svc-66qdj [932.503329ms]
Jan 29 15:02:25.922: INFO: Created: latency-svc-ljqp8
Jan 29 15:02:25.930: INFO: Got endpoints: latency-svc-8k5p2 [80.649192ms]
Jan 29 15:02:25.933: INFO: Got endpoints: latency-svc-ljqp8 [75.873716ms]
Jan 29 15:02:25.943: INFO: Got endpoints: latency-svc-rgtp8 [964.510945ms]
Jan 29 15:02:25.947: INFO: Created: latency-svc-j7dgl
Jan 29 15:02:25.968: INFO: Created: latency-svc-zw7gf
Jan 29 15:02:25.968: INFO: Got endpoints: latency-svc-j7dgl [111.822485ms]
Jan 29 15:02:25.984: INFO: Got endpoints: latency-svc-zw7gf [122.934992ms]
Jan 29 15:02:25.994: INFO: Created: latency-svc-p8ht5
Jan 29 15:02:26.004: INFO: Got endpoints: latency-svc-p8ht5 [136.008142ms]
Jan 29 15:02:26.018: INFO: Created: latency-svc-4trpc
Jan 29 15:02:26.036: INFO: Got endpoints: latency-svc-4trpc [178.67954ms]
Jan 29 15:02:26.041: INFO: Created: latency-svc-5ksg2
Jan 29 15:02:26.054: INFO: Got endpoints: latency-svc-5ksg2 [161.829473ms]
Jan 29 15:02:26.059: INFO: Created: latency-svc-zfksk
Jan 29 15:02:26.083: INFO: Got endpoints: latency-svc-zfksk [191.375939ms]
Jan 29 15:02:26.089: INFO: Created: latency-svc-llbdp
Jan 29 15:02:26.099: INFO: Got endpoints: latency-svc-llbdp [207.050396ms]
Jan 29 15:02:26.111: INFO: Created: latency-svc-vqw2v
Jan 29 15:02:26.117: INFO: Got endpoints: latency-svc-vqw2v [215.653788ms]
Jan 29 15:02:26.136: INFO: Created: latency-svc-vm5nh
Jan 29 15:02:26.156:
INFO: Got endpoints: latency-svc-vm5nh [245.50474ms] Jan 29 15:02:26.227: INFO: Created: latency-svc-5mrt8 Jan 29 15:02:26.227: INFO: Got endpoints: latency-svc-5mrt8 [315.935286ms] Jan 29 15:02:26.245: INFO: Created: latency-svc-lhtzc Jan 29 15:02:26.253: INFO: Got endpoints: latency-svc-lhtzc [340.686912ms] Jan 29 15:02:26.269: INFO: Created: latency-svc-f25mx Jan 29 15:02:26.332: INFO: Created: latency-svc-2jkfm Jan 29 15:02:26.336: INFO: Got endpoints: latency-svc-f25mx [406.663706ms] Jan 29 15:02:26.341: INFO: Got endpoints: latency-svc-2jkfm [407.618442ms] Jan 29 15:02:26.368: INFO: Created: latency-svc-wkrjm Jan 29 15:02:26.384: INFO: Got endpoints: latency-svc-wkrjm [440.557203ms] Jan 29 15:02:26.403: INFO: Created: latency-svc-plc74 Jan 29 15:02:26.414: INFO: Got endpoints: latency-svc-plc74 [445.751763ms] Jan 29 15:02:26.459: INFO: Created: latency-svc-2jgrq Jan 29 15:02:26.482: INFO: Got endpoints: latency-svc-2jgrq [497.641742ms] Jan 29 15:02:26.525: INFO: Created: latency-svc-6b5ff Jan 29 15:02:26.574: INFO: Got endpoints: latency-svc-6b5ff [568.511977ms] Jan 29 15:02:26.596: INFO: Created: latency-svc-q6qmh Jan 29 15:02:26.609: INFO: Got endpoints: latency-svc-q6qmh [572.757029ms] Jan 29 15:02:26.628: INFO: Created: latency-svc-n7dnf Jan 29 15:02:26.640: INFO: Got endpoints: latency-svc-n7dnf [586.445597ms] Jan 29 15:02:26.659: INFO: Created: latency-svc-bwvgb Jan 29 15:02:26.667: INFO: Got endpoints: latency-svc-bwvgb [583.837703ms] Jan 29 15:02:26.688: INFO: Created: latency-svc-dgp2h Jan 29 15:02:26.704: INFO: Got endpoints: latency-svc-dgp2h [604.553784ms] Jan 29 15:02:26.710: INFO: Created: latency-svc-hqfdk Jan 29 15:02:26.714: INFO: Got endpoints: latency-svc-hqfdk [596.219383ms] Jan 29 15:02:26.739: INFO: Created: latency-svc-khfrq Jan 29 15:02:26.740: INFO: Created: latency-svc-64wbq Jan 29 15:02:26.743: INFO: Got endpoints: latency-svc-khfrq [587.526202ms] Jan 29 15:02:26.746: INFO: Created: latency-svc-764g2 Jan 29 15:02:26.748: INFO: Got 
endpoints: latency-svc-64wbq [835.859814ms] Jan 29 15:02:26.774: INFO: Got endpoints: latency-svc-764g2 [546.719179ms] Jan 29 15:02:26.794: INFO: Created: latency-svc-gsz6s Jan 29 15:02:26.810: INFO: Created: latency-svc-92jrk Jan 29 15:02:26.826: INFO: Got endpoints: latency-svc-92jrk [489.303541ms] Jan 29 15:02:26.826: INFO: Got endpoints: latency-svc-gsz6s [573.51868ms] Jan 29 15:02:26.830: INFO: Created: latency-svc-crt4z Jan 29 15:02:26.846: INFO: Got endpoints: latency-svc-crt4z [503.28155ms] Jan 29 15:02:26.890: INFO: Created: latency-svc-kw8sb Jan 29 15:02:26.913: INFO: Got endpoints: latency-svc-kw8sb [529.617741ms] Jan 29 15:02:26.927: INFO: Created: latency-svc-nckfz Jan 29 15:02:26.951: INFO: Created: latency-svc-vl9ss Jan 29 15:02:26.954: INFO: Got endpoints: latency-svc-nckfz [539.944165ms] Jan 29 15:02:26.970: INFO: Got endpoints: latency-svc-vl9ss [488.286595ms] Jan 29 15:02:26.985: INFO: Created: latency-svc-cbgfp Jan 29 15:02:26.986: INFO: Got endpoints: latency-svc-cbgfp [412.263125ms] Jan 29 15:02:27.001: INFO: Created: latency-svc-lnrsm Jan 29 15:02:27.024: INFO: Created: latency-svc-2vz9v Jan 29 15:02:27.024: INFO: Got endpoints: latency-svc-lnrsm [415.545054ms] Jan 29 15:02:27.044: INFO: Got endpoints: latency-svc-2vz9v [403.643525ms] Jan 29 15:02:27.064: INFO: Created: latency-svc-kz94h Jan 29 15:02:27.134: INFO: Got endpoints: latency-svc-kz94h [466.212004ms] Jan 29 15:02:27.154: INFO: Created: latency-svc-rjhfq Jan 29 15:02:27.186: INFO: Created: latency-svc-wdbv9 Jan 29 15:02:27.234: INFO: Got endpoints: latency-svc-rjhfq [529.809508ms] Jan 29 15:02:27.242: INFO: Got endpoints: latency-svc-wdbv9 [528.479824ms] Jan 29 15:02:27.282: INFO: Created: latency-svc-mvd6g Jan 29 15:02:27.341: INFO: Got endpoints: latency-svc-mvd6g [597.722693ms] Jan 29 15:02:27.936: INFO: Created: latency-svc-rpxkd Jan 29 15:02:27.998: INFO: Created: latency-svc-hr28b Jan 29 15:02:27.998: INFO: Created: latency-svc-gjxt9 Jan 29 15:02:27.999: INFO: Created: 
latency-svc-qs2gh Jan 29 15:02:27.999: INFO: Created: latency-svc-98gqc Jan 29 15:02:27.999: INFO: Created: latency-svc-t2vb9 Jan 29 15:02:27.999: INFO: Created: latency-svc-m2pbl Jan 29 15:02:28.000: INFO: Created: latency-svc-pzf8x Jan 29 15:02:28.000: INFO: Created: latency-svc-n7dcj Jan 29 15:02:28.000: INFO: Created: latency-svc-mfjm9 Jan 29 15:02:28.001: INFO: Created: latency-svc-ng5q8 Jan 29 15:02:28.001: INFO: Created: latency-svc-xgnt9 Jan 29 15:02:28.001: INFO: Created: latency-svc-wv57l Jan 29 15:02:28.001: INFO: Created: latency-svc-4gf9m Jan 29 15:02:28.001: INFO: Created: latency-svc-dxgn4 Jan 29 15:02:28.012: INFO: Got endpoints: latency-svc-rpxkd [1.18557542s] Jan 29 15:02:28.037: INFO: Got endpoints: latency-svc-wv57l [1.211237036s] Jan 29 15:02:28.043: INFO: Got endpoints: latency-svc-xgnt9 [1.129554313s] Jan 29 15:02:28.043: INFO: Got endpoints: latency-svc-qs2gh [1.197141881s] Jan 29 15:02:28.043: INFO: Got endpoints: latency-svc-ng5q8 [1.089092115s] Jan 29 15:02:28.091: INFO: Got endpoints: latency-svc-dxgn4 [1.342921018s] Jan 29 15:02:28.091: INFO: Got endpoints: latency-svc-mfjm9 [1.120812872s] Jan 29 15:02:28.091: INFO: Got endpoints: latency-svc-n7dcj [1.105135196s] Jan 29 15:02:28.099: INFO: Got endpoints: latency-svc-hr28b [965.74083ms] Jan 29 15:02:28.100: INFO: Got endpoints: latency-svc-4gf9m [1.325679329s] Jan 29 15:02:28.137: INFO: Got endpoints: latency-svc-t2vb9 [894.291362ms] Jan 29 15:02:28.187: INFO: Created: latency-svc-vsd6z Jan 29 15:02:28.193: INFO: Got endpoints: latency-svc-pzf8x [1.149313711s] Jan 29 15:02:28.194: INFO: Got endpoints: latency-svc-98gqc [852.439427ms] Jan 29 15:02:28.194: INFO: Got endpoints: latency-svc-m2pbl [960.618789ms] Jan 29 15:02:28.195: INFO: Got endpoints: latency-svc-gjxt9 [1.170933492s] Jan 29 15:02:28.276: INFO: Got endpoints: latency-svc-vsd6z [263.857239ms] Jan 29 15:02:28.284: INFO: Created: latency-svc-kf2js Jan 29 15:02:28.332: INFO: Got endpoints: latency-svc-kf2js [295.038747ms] Jan 29 
15:02:28.432: INFO: Created: latency-svc-tz7pl Jan 29 15:02:28.460: INFO: Got endpoints: latency-svc-tz7pl [405.402883ms] Jan 29 15:02:28.554: INFO: Created: latency-svc-fgrbx Jan 29 15:02:28.576: INFO: Got endpoints: latency-svc-fgrbx [521.323343ms] Jan 29 15:02:28.646: INFO: Created: latency-svc-md8sj Jan 29 15:02:28.700: INFO: Got endpoints: latency-svc-md8sj [644.906501ms] Jan 29 15:02:28.817: INFO: Created: latency-svc-fbmrk Jan 29 15:02:28.817: INFO: Got endpoints: latency-svc-fbmrk [705.008382ms] Jan 29 15:02:28.890: INFO: Created: latency-svc-tsl67 Jan 29 15:02:29.025: INFO: Created: latency-svc-sbm62 Jan 29 15:02:29.029: INFO: Got endpoints: latency-svc-tsl67 [917.963428ms] Jan 29 15:02:29.262: INFO: Created: latency-svc-pb29w Jan 29 15:02:29.263: INFO: Got endpoints: latency-svc-sbm62 [1.151604044s] Jan 29 15:02:29.329: INFO: Got endpoints: latency-svc-pb29w [1.217485535s] Jan 29 15:02:29.331: INFO: Created: latency-svc-l4w5x Jan 29 15:02:29.501: INFO: Got endpoints: latency-svc-l4w5x [1.389697038s] Jan 29 15:02:29.508: INFO: Created: latency-svc-j6922 Jan 29 15:02:29.703: INFO: Created: latency-svc-v6ggm Jan 29 15:02:29.707: INFO: Got endpoints: latency-svc-j6922 [1.570363301s] Jan 29 15:02:30.016: INFO: Got endpoints: latency-svc-v6ggm [1.823052892s] Jan 29 15:02:30.063: INFO: Created: latency-svc-gmv4q Jan 29 15:02:30.256: INFO: Got endpoints: latency-svc-gmv4q [2.060304433s] Jan 29 15:02:30.389: INFO: Created: latency-svc-hwm47 Jan 29 15:02:30.690: INFO: Got endpoints: latency-svc-hwm47 [2.496558267s] Jan 29 15:02:30.922: INFO: Created: latency-svc-dqb85 Jan 29 15:02:31.153: INFO: Got endpoints: latency-svc-dqb85 [2.958577349s] Jan 29 15:02:31.216: INFO: Created: latency-svc-bwkzp Jan 29 15:02:31.627: INFO: Got endpoints: latency-svc-bwkzp [3.351328813s] Jan 29 15:02:31.794: INFO: Created: latency-svc-gzft9 Jan 29 15:02:32.171: INFO: Created: latency-svc-5gt44 Jan 29 15:02:32.274: INFO: Got endpoints: latency-svc-gzft9 [3.941987652s] Jan 29 
15:02:33.015: INFO: Got endpoints: latency-svc-5gt44 [4.554908831s] Jan 29 15:02:33.158: INFO: Created: latency-svc-dg47j Jan 29 15:02:33.207: INFO: Got endpoints: latency-svc-dg47j [4.63110021s] Jan 29 15:02:33.995: INFO: Created: latency-svc-8xdf8 Jan 29 15:02:34.146: INFO: Got endpoints: latency-svc-8xdf8 [5.446157209s] Jan 29 15:02:34.435: INFO: Created: latency-svc-ktqt5 Jan 29 15:02:34.594: INFO: Got endpoints: latency-svc-ktqt5 [5.776618526s] Jan 29 15:02:34.628: INFO: Created: latency-svc-d6cqb Jan 29 15:02:34.685: INFO: Got endpoints: latency-svc-d6cqb [5.656192279s] Jan 29 15:02:34.713: INFO: Created: latency-svc-f62xq Jan 29 15:02:34.721: INFO: Got endpoints: latency-svc-f62xq [5.391963634s] Jan 29 15:02:34.790: INFO: Created: latency-svc-s44v9 Jan 29 15:02:34.813: INFO: Got endpoints: latency-svc-s44v9 [5.549992549s] Jan 29 15:02:34.851: INFO: Created: latency-svc-vkvtb Jan 29 15:02:34.866: INFO: Got endpoints: latency-svc-vkvtb [5.364687737s] Jan 29 15:02:34.899: INFO: Created: latency-svc-fzjp8 Jan 29 15:02:34.947: INFO: Got endpoints: latency-svc-fzjp8 [5.239341342s] Jan 29 15:02:34.996: INFO: Created: latency-svc-jh4cn Jan 29 15:02:35.039: INFO: Got endpoints: latency-svc-jh4cn [5.022177171s] Jan 29 15:02:35.242: INFO: Created: latency-svc-tpph4 Jan 29 15:02:35.270: INFO: Got endpoints: latency-svc-tpph4 [5.014465474s] Jan 29 15:02:35.365: INFO: Created: latency-svc-pzmt2 Jan 29 15:02:35.402: INFO: Got endpoints: latency-svc-pzmt2 [4.711886938s] Jan 29 15:02:35.425: INFO: Created: latency-svc-gf48b Jan 29 15:02:35.477: INFO: Got endpoints: latency-svc-gf48b [4.3242914s] Jan 29 15:02:35.530: INFO: Created: latency-svc-ls4nl Jan 29 15:02:35.540: INFO: Created: latency-svc-xkg26 Jan 29 15:02:35.547: INFO: Created: latency-svc-zj5tt Jan 29 15:02:35.609: INFO: Got endpoints: latency-svc-zj5tt [2.593871201s] Jan 29 15:02:35.613: INFO: Got endpoints: latency-svc-ls4nl [3.985686843s] Jan 29 15:02:35.625: INFO: Got endpoints: latency-svc-xkg26 [3.351139028s] 
Jan 29 15:02:35.629: INFO: Created: latency-svc-td872 Jan 29 15:02:35.696: INFO: Got endpoints: latency-svc-td872 [2.48838444s] Jan 29 15:02:35.728: INFO: Created: latency-svc-rt6k4 Jan 29 15:02:35.728: INFO: Created: latency-svc-8nnwj Jan 29 15:02:35.741: INFO: Got endpoints: latency-svc-8nnwj [1.594891068s] Jan 29 15:02:35.742: INFO: Got endpoints: latency-svc-rt6k4 [1.148106786s] Jan 29 15:02:35.801: INFO: Created: latency-svc-7ngwb Jan 29 15:02:35.823: INFO: Got endpoints: latency-svc-7ngwb [1.137450591s] Jan 29 15:02:35.898: INFO: Created: latency-svc-7n85b Jan 29 15:02:35.898: INFO: Got endpoints: latency-svc-7n85b [1.177006977s] Jan 29 15:02:35.959: INFO: Created: latency-svc-4hn5t Jan 29 15:02:36.049: INFO: Got endpoints: latency-svc-4hn5t [1.23588708s] Jan 29 15:02:36.062: INFO: Created: latency-svc-hksxn Jan 29 15:02:36.217: INFO: Got endpoints: latency-svc-hksxn [1.351144909s] Jan 29 15:02:37.373: INFO: Created: latency-svc-blrzd Jan 29 15:02:37.374: INFO: Created: latency-svc-r7sl7 Jan 29 15:02:37.376: INFO: Created: latency-svc-gjdmd Jan 29 15:02:37.382: INFO: Created: latency-svc-4p4wl Jan 29 15:02:37.382: INFO: Created: latency-svc-rpmtd Jan 29 15:02:37.383: INFO: Created: latency-svc-9wkdb Jan 29 15:02:37.384: INFO: Created: latency-svc-c6f2k Jan 29 15:02:37.386: INFO: Created: latency-svc-t5zpn Jan 29 15:02:37.406: INFO: Created: latency-svc-pp6xx Jan 29 15:02:37.417: INFO: Created: latency-svc-7bc6m Jan 29 15:02:37.417: INFO: Created: latency-svc-4lqwk Jan 29 15:02:37.436: INFO: Got endpoints: latency-svc-gjdmd [2.489580208s] Jan 29 15:02:37.493: INFO: Got endpoints: latency-svc-blrzd [1.865663495s] Jan 29 15:02:37.511: INFO: Got endpoints: latency-svc-4p4wl [1.294542059s] Jan 29 15:02:37.523: INFO: Created: latency-svc-bdprw Jan 29 15:02:37.540: INFO: Created: latency-svc-fstr6 Jan 29 15:02:37.561: INFO: Created: latency-svc-7nffg Jan 29 15:02:37.562: INFO: Created: latency-svc-f5mzs Jan 29 15:02:37.732: INFO: Created: latency-svc-jgdl9 Jan 29 
15:02:37.732: INFO: Got endpoints: latency-svc-rpmtd [2.692632338s] Jan 29 15:02:37.732: INFO: Got endpoints: latency-svc-c6f2k [2.32592351s] Jan 29 15:02:37.732: INFO: Got endpoints: latency-svc-f5mzs [1.9910122s] Jan 29 15:02:37.772: INFO: Got endpoints: latency-svc-4lqwk [2.501596741s] Jan 29 15:02:37.772: INFO: Got endpoints: latency-svc-pp6xx [2.075801427s] Jan 29 15:02:37.917: INFO: Created: latency-svc-5x4mk Jan 29 15:02:37.966: INFO: Got endpoints: latency-svc-t5zpn [2.356816114s] Jan 29 15:02:37.966: INFO: Got endpoints: latency-svc-fstr6 [2.143253729s] Jan 29 15:02:37.967: INFO: Got endpoints: latency-svc-7bc6m [2.068227208s] Jan 29 15:02:37.974: INFO: Got endpoints: latency-svc-bdprw [2.232691939s] Jan 29 15:02:37.975: INFO: Got endpoints: latency-svc-9wkdb [2.497029959s] Jan 29 15:02:38.024: INFO: Got endpoints: latency-svc-7nffg [1.974489545s] Jan 29 15:02:38.065: INFO: Created: latency-svc-nwj4x Jan 29 15:02:38.070: INFO: Got endpoints: latency-svc-jgdl9 [633.716358ms] Jan 29 15:02:38.070: INFO: Got endpoints: latency-svc-r7sl7 [2.457334386s] Jan 29 15:02:38.106: INFO: Got endpoints: latency-svc-nwj4x [584.825464ms] Jan 29 15:02:38.123: INFO: Got endpoints: latency-svc-5x4mk [630.160544ms] Jan 29 15:02:38.181: INFO: Created: latency-svc-gpm5b Jan 29 15:02:38.224: INFO: Got endpoints: latency-svc-gpm5b [487.157174ms] Jan 29 15:02:38.310: INFO: Created: latency-svc-jmvdv Jan 29 15:02:38.365: INFO: Got endpoints: latency-svc-jmvdv [626.574351ms] Jan 29 15:02:38.440: INFO: Created: latency-svc-rjc8k Jan 29 15:02:38.467: INFO: Got endpoints: latency-svc-rjc8k [695.601343ms] Jan 29 15:02:38.550: INFO: Created: latency-svc-mnbth Jan 29 15:02:38.635: INFO: Created: latency-svc-dtftz Jan 29 15:02:38.649: INFO: Got endpoints: latency-svc-mnbth [877.601942ms] Jan 29 15:02:38.669: INFO: Got endpoints: latency-svc-dtftz [930.87879ms] Jan 29 15:02:38.728: INFO: Created: latency-svc-n6t6z Jan 29 15:02:38.781: INFO: Got endpoints: latency-svc-n6t6z [814.623623ms] Jan 
29 15:02:38.798: INFO: Created: latency-svc-sd9lw Jan 29 15:02:38.806: INFO: Got endpoints: latency-svc-sd9lw [839.191465ms] Jan 29 15:02:38.982: INFO: Created: latency-svc-2trc9 Jan 29 15:02:39.044: INFO: Got endpoints: latency-svc-2trc9 [1.069684331s] Jan 29 15:02:39.048: INFO: Created: latency-svc-jlqz9 Jan 29 15:02:39.076: INFO: Got endpoints: latency-svc-jlqz9 [1.101159432s] Jan 29 15:02:39.093: INFO: Created: latency-svc-wl9pc Jan 29 15:02:39.106: INFO: Got endpoints: latency-svc-wl9pc [1.132318413s] Jan 29 15:02:39.257: INFO: Created: latency-svc-k67ns Jan 29 15:02:39.281: INFO: Got endpoints: latency-svc-k67ns [1.243414728s] Jan 29 15:02:39.408: INFO: Created: latency-svc-pthzv Jan 29 15:02:39.451: INFO: Got endpoints: latency-svc-pthzv [1.38081059s] Jan 29 15:02:39.457: INFO: Created: latency-svc-67tz8 Jan 29 15:02:39.487: INFO: Got endpoints: latency-svc-67tz8 [1.417194806s] Jan 29 15:02:39.527: INFO: Created: latency-svc-p77fg Jan 29 15:02:39.537: INFO: Got endpoints: latency-svc-p77fg [1.43174322s] Jan 29 15:02:39.574: INFO: Created: latency-svc-78x6w Jan 29 15:02:39.636: INFO: Got endpoints: latency-svc-78x6w [1.511987513s] Jan 29 15:02:39.619: INFO: Created: latency-svc-s6s8s Jan 29 15:02:39.636: INFO: Got endpoints: latency-svc-s6s8s [1.411195448s] Jan 29 15:02:39.668: INFO: Created: latency-svc-krp47 Jan 29 15:02:39.668: INFO: Got endpoints: latency-svc-krp47 [1.303406712s] Jan 29 15:02:39.699: INFO: Created: latency-svc-fplg6 Jan 29 15:02:39.758: INFO: Got endpoints: latency-svc-fplg6 [1.290494463s] Jan 29 15:02:39.765: INFO: Created: latency-svc-b6nn8 Jan 29 15:02:39.789: INFO: Got endpoints: latency-svc-b6nn8 [1.139609564s] Jan 29 15:02:39.815: INFO: Created: latency-svc-jncb4 Jan 29 15:02:39.849: INFO: Got endpoints: latency-svc-jncb4 [1.16594453s] Jan 29 15:02:39.855: INFO: Created: latency-svc-zzgv5 Jan 29 15:02:39.892: INFO: Got endpoints: latency-svc-zzgv5 [1.111261664s] Jan 29 15:02:39.936: INFO: Created: latency-svc-4ppgk Jan 29 
15:02:39.975: INFO: Got endpoints: latency-svc-4ppgk [1.169303006s] Jan 29 15:02:39.982: INFO: Created: latency-svc-lhh62 Jan 29 15:02:39.998: INFO: Got endpoints: latency-svc-lhh62 [953.760443ms] Jan 29 15:02:40.035: INFO: Created: latency-svc-vhf65 Jan 29 15:02:40.058: INFO: Got endpoints: latency-svc-vhf65 [982.618061ms] Jan 29 15:02:40.132: INFO: Created: latency-svc-87wnh Jan 29 15:02:40.174: INFO: Got endpoints: latency-svc-87wnh [1.067534261s] Jan 29 15:02:40.214: INFO: Created: latency-svc-xfr4r Jan 29 15:02:40.263: INFO: Got endpoints: latency-svc-xfr4r [982.214389ms] Jan 29 15:02:40.317: INFO: Created: latency-svc-vfcw5 Jan 29 15:02:40.346: INFO: Got endpoints: latency-svc-vfcw5 [880.395008ms] Jan 29 15:02:40.362: INFO: Created: latency-svc-2mmnt Jan 29 15:02:40.521: INFO: Got endpoints: latency-svc-2mmnt [1.010113022s] Jan 29 15:02:40.546: INFO: Created: latency-svc-7nwhp Jan 29 15:02:40.559: INFO: Got endpoints: latency-svc-7nwhp [1.021530589s] Jan 29 15:02:40.722: INFO: Created: latency-svc-wqkwg Jan 29 15:02:40.804: INFO: Got endpoints: latency-svc-wqkwg [1.168325062s] Jan 29 15:02:40.815: INFO: Created: latency-svc-kcs8q Jan 29 15:02:40.857: INFO: Got endpoints: latency-svc-kcs8q [1.221501449s] Jan 29 15:02:40.872: INFO: Created: latency-svc-rjxjk Jan 29 15:02:40.887: INFO: Got endpoints: latency-svc-rjxjk [1.219121031s] Jan 29 15:02:40.902: INFO: Created: latency-svc-ngq9n Jan 29 15:02:40.917: INFO: Got endpoints: latency-svc-ngq9n [1.142649053s] Jan 29 15:02:40.939: INFO: Created: latency-svc-bx89s Jan 29 15:02:40.942: INFO: Got endpoints: latency-svc-bx89s [1.149827838s] Jan 29 15:02:40.990: INFO: Created: latency-svc-wvpjr Jan 29 15:02:40.999: INFO: Created: latency-svc-dpr89 Jan 29 15:02:41.042: INFO: Got endpoints: latency-svc-dpr89 [1.150111591s] Jan 29 15:02:41.042: INFO: Got endpoints: latency-svc-wvpjr [1.172468459s] Jan 29 15:02:41.076: INFO: Created: latency-svc-mms6p Jan 29 15:02:41.111: INFO: Got endpoints: latency-svc-mms6p 
[1.136177172s] Jan 29 15:02:41.144: INFO: Created: latency-svc-tmmxs Jan 29 15:02:41.186: INFO: Got endpoints: latency-svc-tmmxs [1.185885369s] Jan 29 15:02:41.239: INFO: Created: latency-svc-7dgmp Jan 29 15:02:41.295: INFO: Got endpoints: latency-svc-7dgmp [1.23622495s] Jan 29 15:02:41.311: INFO: Created: latency-svc-dhnlv Jan 29 15:02:41.424: INFO: Got endpoints: latency-svc-dhnlv [1.22362441s] Jan 29 15:02:41.485: INFO: Created: latency-svc-jc95w Jan 29 15:02:41.588: INFO: Got endpoints: latency-svc-jc95w [1.324575951s] Jan 29 15:02:41.742: INFO: Created: latency-svc-jf4p9 Jan 29 15:02:41.785: INFO: Created: latency-svc-hzk26 Jan 29 15:02:41.794: INFO: Got endpoints: latency-svc-jf4p9 [1.448245894s] Jan 29 15:02:41.837: INFO: Got endpoints: latency-svc-hzk26 [1.315565642s] Jan 29 15:02:41.894: INFO: Created: latency-svc-vqmc6 Jan 29 15:02:41.995: INFO: Got endpoints: latency-svc-vqmc6 [1.432009858s] Jan 29 15:02:42.159: INFO: Created: latency-svc-bqz97 Jan 29 15:02:42.214: INFO: Got endpoints: latency-svc-bqz97 [1.410309714s] Jan 29 15:02:42.330: INFO: Created: latency-svc-2hgtb Jan 29 15:02:42.387: INFO: Got endpoints: latency-svc-2hgtb [1.530157382s] Jan 29 15:02:42.471: INFO: Created: latency-svc-8wwgk Jan 29 15:02:42.532: INFO: Got endpoints: latency-svc-8wwgk [1.644887156s] Jan 29 15:02:42.571: INFO: Created: latency-svc-cdggf Jan 29 15:02:42.598: INFO: Got endpoints: latency-svc-cdggf [1.680814549s] Jan 29 15:02:42.629: INFO: Created: latency-svc-fgz4q Jan 29 15:02:42.679: INFO: Got endpoints: latency-svc-fgz4q [1.737133537s] Jan 29 15:02:42.716: INFO: Created: latency-svc-mzg9z Jan 29 15:02:42.744: INFO: Got endpoints: latency-svc-mzg9z [1.701481133s] Jan 29 15:02:42.758: INFO: Created: latency-svc-k285m Jan 29 15:02:42.774: INFO: Got endpoints: latency-svc-k285m [1.732075225s] Jan 29 15:02:42.878: INFO: Created: latency-svc-48d7t Jan 29 15:02:42.940: INFO: Got endpoints: latency-svc-48d7t [1.828434992s] Jan 29 15:02:42.972: INFO: Created: 
latency-svc-hf4bv Jan 29 15:02:42.985: INFO: Created: latency-svc-h59tf Jan 29 15:02:43.014: INFO: Got endpoints: latency-svc-hf4bv [1.827645168s] Jan 29 15:02:43.014: INFO: Got endpoints: latency-svc-h59tf [1.718966213s] Jan 29 15:02:43.091: INFO: Created: latency-svc-rr7kk Jan 29 15:02:43.171: INFO: Got endpoints: latency-svc-rr7kk [1.747299607s] Jan 29 15:02:43.269: INFO: Created: latency-svc-6z2z5 Jan 29 15:02:43.372: INFO: Got endpoints: latency-svc-6z2z5 [1.784456483s] Jan 29 15:02:43.400: INFO: Created: latency-svc-kvknb Jan 29 15:02:43.511: INFO: Got endpoints: latency-svc-kvknb [1.716424632s] Jan 29 15:02:43.646: INFO: Created: latency-svc-tzh6w Jan 29 15:02:43.734: INFO: Got endpoints: latency-svc-tzh6w [1.897559571s] Jan 29 15:02:43.817: INFO: Created: latency-svc-bcbnh Jan 29 15:02:43.881: INFO: Got endpoints: latency-svc-bcbnh [1.886156272s] Jan 29 15:02:43.964: INFO: Created: latency-svc-c7gxb Jan 29 15:02:44.013: INFO: Created: latency-svc-hkl6m Jan 29 15:02:44.027: INFO: Got endpoints: latency-svc-c7gxb [1.812879326s] Jan 29 15:02:44.048: INFO: Got endpoints: latency-svc-hkl6m [1.660016208s] Jan 29 15:02:44.112: INFO: Created: latency-svc-jzrnr Jan 29 15:02:44.213: INFO: Got endpoints: latency-svc-jzrnr [1.680424697s] Jan 29 15:02:44.218: INFO: Created: latency-svc-vv49q Jan 29 15:02:44.274: INFO: Created: latency-svc-4rn2b Jan 29 15:02:44.276: INFO: Got endpoints: latency-svc-vv49q [1.677997148s] Jan 29 15:02:44.304: INFO: Got endpoints: latency-svc-4rn2b [1.624933889s] Jan 29 15:02:44.383: INFO: Created: latency-svc-5s28w Jan 29 15:02:44.386: INFO: Got endpoints: latency-svc-5s28w [1.6425278s] Jan 29 15:02:44.386: INFO: Latencies: [69.887663ms 75.873716ms 80.649192ms 111.822485ms 122.934992ms 136.008142ms 137.885889ms 161.829473ms 174.100392ms 178.67954ms 191.375939ms 207.050396ms 215.653788ms 245.50474ms 263.857239ms 295.038747ms 311.954248ms 315.935286ms 339.361434ms 340.686912ms 382.963498ms 403.643525ms 405.402883ms 406.663706ms 407.618442ms 
410.942607ms 412.263125ms 415.545054ms 440.557203ms 445.751763ms 466.212004ms 487.157174ms 488.286595ms 489.303541ms 497.641742ms 501.052472ms 503.28155ms 521.323343ms 528.479824ms 529.617741ms 529.809508ms 539.944165ms 540.531447ms 546.474331ms 546.719179ms 568.511977ms 572.757029ms 573.51868ms 583.837703ms 584.825464ms 586.445597ms 587.526202ms 596.219383ms 597.722693ms 604.553784ms 620.02994ms 626.574351ms 630.160544ms 633.716358ms 644.906501ms 695.601343ms 705.008382ms 747.90185ms 774.86938ms 814.623623ms 820.573147ms 835.859814ms 839.191465ms 852.439427ms 870.574291ms 875.453579ms 876.059299ms 876.864904ms 877.601942ms 880.395008ms 894.291362ms 912.150072ms 917.963428ms 930.87879ms 932.503329ms 933.057356ms 953.760443ms 960.618789ms 964.510945ms 965.74083ms 982.214389ms 982.618061ms 1.010113022s 1.021530589s 1.067534261s 1.069684331s 1.089092115s 1.101159432s 1.105135196s 1.111261664s 1.120812872s 1.129554313s 1.132318413s 1.136177172s 1.137450591s 1.139609564s 1.142649053s 1.148106786s 1.149313711s 1.149827838s 1.150111591s 1.151604044s 1.16594453s 1.168325062s 1.169303006s 1.170933492s 1.172468459s 1.177006977s 1.18557542s 1.185885369s 1.197141881s 1.211237036s 1.217485535s 1.219121031s 1.221501449s 1.22362441s 1.23588708s 1.23622495s 1.243414728s 1.290494463s 1.294542059s 1.303406712s 1.315565642s 1.324575951s 1.325679329s 1.342921018s 1.351144909s 1.38081059s 1.389697038s 1.410309714s 1.411195448s 1.417194806s 1.43174322s 1.432009858s 1.448245894s 1.511987513s 1.530157382s 1.570363301s 1.594891068s 1.624933889s 1.6425278s 1.644887156s 1.660016208s 1.677997148s 1.680424697s 1.680814549s 1.701481133s 1.716424632s 1.718966213s 1.732075225s 1.737133537s 1.747299607s 1.784456483s 1.812879326s 1.823052892s 1.827645168s 1.828434992s 1.865663495s 1.886156272s 1.897559571s 1.974489545s 1.9910122s 2.060304433s 2.068227208s 2.075801427s 2.143253729s 2.232691939s 2.32592351s 2.356816114s 2.457334386s 2.48838444s 2.489580208s 2.496558267s 2.497029959s 2.501596741s 
2.593871201s 2.692632338s 2.958577349s 3.351139028s 3.351328813s 3.941987652s 3.985686843s 4.3242914s 4.554908831s 4.63110021s 4.711886938s 5.014465474s 5.022177171s 5.239341342s 5.364687737s 5.391963634s 5.446157209s 5.549992549s 5.656192279s 5.776618526s] Jan 29 15:02:44.387: INFO: 50 %ile: 1.139609564s Jan 29 15:02:44.387: INFO: 90 %ile: 2.593871201s Jan 29 15:02:44.387: INFO: 99 %ile: 5.656192279s Jan 29 15:02:44.387: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:02:44.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1339" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":5,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:02:44.628: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should delete a collection of services [Conformance] 
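As a side note on the "50 %ile / 90 %ile / 99 %ile" summary above: the framework sorts the 200 raw endpoint-creation latencies and reports percentiles over them. A minimal sketch of that summarization, assuming a simple nearest-rank percentile (the function name and the sample subset below are illustrative, not the e2e framework's actual Go code):

```python
def percentile(samples_ms, p):
    """Nearest-rank percentile: the smallest sample covering p% of the sorted data."""
    ordered = sorted(samples_ms)
    # ceil(p/100 * n) as a 1-based rank, clamped to at least the first element
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

# Made-up subset of the latencies logged above, in milliseconds.
samples = [69.9, 75.9, 80.6, 1139.6, 2593.9, 5656.2]
print(percentile(samples, 50))  # median of the sketch data
```

With all 200 samples loaded, the same call reproduces figures in the shape of the `50 %ile`/`90 %ile`/`99 %ile` lines in the log.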
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a collection of services Jan 29 15:02:44.739: INFO: Creating e2e-svc-a-9zt7g Jan 29 15:02:44.865: INFO: Creating e2e-svc-b-drt87 Jan 29 15:02:44.941: INFO: Creating e2e-svc-c-twsq2 STEP: deleting service collection Jan 29 15:02:45.151: INFO: Collection of services has been deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:02:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3575" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":6,"skipped":124,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:02:45.231: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of events Jan 29 15:02:45.318: INFO: created test-event-1 Jan 29 15:02:45.346: INFO: created test-event-2 Jan 29 15:02:45.362: INFO: created test-event-3 STEP: get a list of Events 
with a label in the current namespace STEP: delete collection of events Jan 29 15:02:45.392: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jan 29 15:02:45.809: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:02:45.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7003" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":7,"skipped":128,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:02:27.806: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Jan 29 15:02:46.325: INFO: 70 pods remaining Jan 29 15:02:46.325: INFO: 70 pods has nil DeletionTimestamp Jan 29 15:02:46.325: INFO: 
STEP: Gathering metrics
Jan 29 15:02:51.275: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true)
Jan 29 15:02:51.339: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
Jan 29 15:02:51.339: INFO: Deleting pod "simpletest-rc-to-be-deleted-25hz5" in namespace "gc-4988"
Jan 29 15:02:51.351: INFO: Deleting pod "simpletest-rc-to-be-deleted-2z49n" in namespace "gc-4988"
Jan 29 15:02:51.366: INFO: Deleting pod "simpletest-rc-to-be-deleted-42dh6" in namespace "gc-4988"
Jan 29 15:02:51.379: INFO: Deleting pod "simpletest-rc-to-be-deleted-4brmd" in namespace "gc-4988"
Jan 29 15:02:51.399: INFO: Deleting pod "simpletest-rc-to-be-deleted-4v4fp" in namespace "gc-4988"
Jan 29 15:02:51.420: INFO: Deleting pod "simpletest-rc-to-be-deleted-4v86b" in namespace "gc-4988"
Jan 29 15:02:51.479: INFO: Deleting pod "simpletest-rc-to-be-deleted-555nf" in namespace "gc-4988"
Jan 29 15:02:51.506: INFO: Deleting pod "simpletest-rc-to-be-deleted-5g76w" in namespace "gc-4988"
Jan 29 15:02:51.538: INFO: Deleting pod "simpletest-rc-to-be-deleted-69w58" in namespace "gc-4988"
Jan 29
15:02:51.569: INFO: Deleting pod "simpletest-rc-to-be-deleted-6b8k9" in namespace "gc-4988" Jan 29 15:02:51.655: INFO: Deleting pod "simpletest-rc-to-be-deleted-6lkpd" in namespace "gc-4988" Jan 29 15:02:51.716: INFO: Deleting pod "simpletest-rc-to-be-deleted-6n6gt" in namespace "gc-4988" Jan 29 15:02:51.786: INFO: Deleting pod "simpletest-rc-to-be-deleted-6n984" in namespace "gc-4988" Jan 29 15:02:51.851: INFO: Deleting pod "simpletest-rc-to-be-deleted-6nnnq" in namespace "gc-4988" Jan 29 15:02:51.956: INFO: Deleting pod "simpletest-rc-to-be-deleted-6ss9q" in namespace "gc-4988" Jan 29 15:02:51.985: INFO: Deleting pod "simpletest-rc-to-be-deleted-78kms" in namespace "gc-4988" Jan 29 15:02:52.017: INFO: Deleting pod "simpletest-rc-to-be-deleted-7d9d6" in namespace "gc-4988" Jan 29 15:02:52.042: INFO: Deleting pod "simpletest-rc-to-be-deleted-7pkjl" in namespace "gc-4988" Jan 29 15:02:52.062: INFO: Deleting pod "simpletest-rc-to-be-deleted-7sk4c" in namespace "gc-4988" Jan 29 15:02:52.103: INFO: Deleting pod "simpletest-rc-to-be-deleted-877gh" in namespace "gc-4988" Jan 29 15:02:52.142: INFO: Deleting pod "simpletest-rc-to-be-deleted-8cfn4" in namespace "gc-4988" Jan 29 15:02:52.181: INFO: Deleting pod "simpletest-rc-to-be-deleted-8n7kb" in namespace "gc-4988" Jan 29 15:02:52.243: INFO: Deleting pod "simpletest-rc-to-be-deleted-9h5w4" in namespace "gc-4988" Jan 29 15:02:52.286: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lnm9" in namespace "gc-4988" Jan 29 15:02:52.359: INFO: Deleting pod "simpletest-rc-to-be-deleted-9th49" in namespace "gc-4988" Jan 29 15:02:52.394: INFO: Deleting pod "simpletest-rc-to-be-deleted-b7jhl" in namespace "gc-4988" Jan 29 15:02:52.462: INFO: Deleting pod "simpletest-rc-to-be-deleted-bbcnh" in namespace "gc-4988" Jan 29 15:02:52.511: INFO: Deleting pod "simpletest-rc-to-be-deleted-bhj9r" in namespace "gc-4988" Jan 29 15:02:52.562: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjthc" in namespace "gc-4988" Jan 29 15:02:52.595: INFO: 
Deleting pod "simpletest-rc-to-be-deleted-bmk6d" in namespace "gc-4988" Jan 29 15:02:52.628: INFO: Deleting pod "simpletest-rc-to-be-deleted-bqs2v" in namespace "gc-4988" Jan 29 15:02:52.719: INFO: Deleting pod "simpletest-rc-to-be-deleted-bwd2l" in namespace "gc-4988" Jan 29 15:02:52.744: INFO: Deleting pod "simpletest-rc-to-be-deleted-bz97m" in namespace "gc-4988" Jan 29 15:02:52.859: INFO: Deleting pod "simpletest-rc-to-be-deleted-c4nrh" in namespace "gc-4988" Jan 29 15:02:52.967: INFO: Deleting pod "simpletest-rc-to-be-deleted-cd24z" in namespace "gc-4988" Jan 29 15:02:53.085: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkrs8" in namespace "gc-4988" Jan 29 15:02:53.162: INFO: Deleting pod "simpletest-rc-to-be-deleted-dwp28" in namespace "gc-4988" Jan 29 15:02:53.245: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2xhg" in namespace "gc-4988" Jan 29 15:02:53.288: INFO: Deleting pod "simpletest-rc-to-be-deleted-fhj2t" in namespace "gc-4988" Jan 29 15:02:53.341: INFO: Deleting pod "simpletest-rc-to-be-deleted-g4v7c" in namespace "gc-4988" Jan 29 15:02:53.372: INFO: Deleting pod "simpletest-rc-to-be-deleted-g5b4p" in namespace "gc-4988" Jan 29 15:02:53.392: INFO: Deleting pod "simpletest-rc-to-be-deleted-g9nkw" in namespace "gc-4988" Jan 29 15:02:53.425: INFO: Deleting pod "simpletest-rc-to-be-deleted-gp2bz" in namespace "gc-4988" Jan 29 15:02:53.455: INFO: Deleting pod "simpletest-rc-to-be-deleted-gxmlz" in namespace "gc-4988" Jan 29 15:02:53.537: INFO: Deleting pod "simpletest-rc-to-be-deleted-h5p57" in namespace "gc-4988" Jan 29 15:02:53.586: INFO: Deleting pod "simpletest-rc-to-be-deleted-h6pg8" in namespace "gc-4988" Jan 29 15:02:53.682: INFO: Deleting pod "simpletest-rc-to-be-deleted-hm4h8" in namespace "gc-4988" Jan 29 15:02:53.703: INFO: Deleting pod "simpletest-rc-to-be-deleted-hwb7h" in namespace "gc-4988" Jan 29 15:02:53.740: INFO: Deleting pod "simpletest-rc-to-be-deleted-hzkkg" in namespace "gc-4988" Jan 29 15:02:53.781: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-jbzs6" in namespace "gc-4988"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:53.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4988" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":6,"skipped":142,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:53.915: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override arguments
Jan 29 15:02:54.028: INFO: Waiting up to 5m0s for pod "client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112" in namespace "containers-6483" to be "Succeeded or Failed"
Jan 29 15:02:54.044: INFO: Pod "client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112": Phase="Pending", Reason="", readiness=false. Elapsed: 16.017109ms
Jan 29 15:02:56.053: INFO: Pod "client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.02417047s
Jan 29 15:02:58.057: INFO: Pod "client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028678483s
STEP: Saw pod success
Jan 29 15:02:58.057: INFO: Pod "client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112" satisfied condition "Succeeded or Failed"
Jan 29 15:02:58.060: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527 pod client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:02:58.075: INFO: Waiting for pod client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112 to disappear
Jan 29 15:02:58.077: INFO: Pod client-containers-b7857dcf-cbfd-4202-81c8-8cc2919a3112 no longer exists
[AfterEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:02:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6483" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":144,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:45.937: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-5182
STEP: creating service affinity-nodeport in namespace services-5182
STEP: creating replication controller affinity-nodeport in namespace services-5182
I0129 15:02:46.240666 20 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-5182, replica count: 3
I0129 15:02:49.292155 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 15:02:52.293048 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady I0129 15:02:55.293240 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 29 15:02:55.308: INFO: Creating new exec pod Jan 29 15:03:02.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5182 exec execpod-affinity28c8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Jan 29 15:03:02.471: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jan 29 15:03:02.471: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:03:02.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5182 exec execpod-affinity28c8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.142.3.24 80' Jan 29 15:03:02.612: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.142.3.24 80\nConnection to 10.142.3.24 80 port [tcp/http] succeeded!\n" Jan 29 15:03:02.612: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:03:02.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5182 exec execpod-affinity28c8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31829' Jan 29 15:03:02.754: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31829\nConnection to 172.18.0.4 31829 port [tcp/*] succeeded!\n" Jan 29 15:03:02.754: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:03:02.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5182 exec execpod-affinity28c8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31829' Jan 29 15:03:02.900: INFO: stderr: 
"+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 31829\nConnection to 172.18.0.6 31829 port [tcp/*] succeeded!\n" Jan 29 15:03:02.900: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:03:02.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5182 exec execpod-affinity28c8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:31829/ ; done' Jan 29 15:03:03.138: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:31829/\n" Jan 29 15:03:03.138: INFO: stdout: 
"\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn\naffinity-nodeport-r4mvn"
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Received response from host: affinity-nodeport-r4mvn
Jan 29 15:03:03.138: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5182, will wait for the garbage collector to delete the pods
Jan 29 15:03:03.206: INFO: Deleting ReplicationController affinity-nodeport took: 7.113638ms
Jan 29 15:03:03.307:
INFO: Terminating ReplicationController affinity-nodeport pods took: 100.51677ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:05.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5182" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:58.111: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:02:58.534: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:03:01.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Jan 29 15:03:02.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Jan 29 15:03:03.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Jan 29 15:03:04.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Jan 29 15:03:05.563: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:03:05.567: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5411-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:08.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1754" for this suite.
STEP: Destroying namespace "webhook-1754-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:22.747: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support rollover [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:02:22.926: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 29 15:02:28.018: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 15:02:52.179: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 29 15:02:54.192: INFO: Creating deployment "test-rollover-deployment"
Jan 29 15:02:54.215: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 29 15:02:56.225: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 29 15:02:56.237: INFO: Ensure that both replica sets have 1 created replica
Jan 29 15:02:56.244: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 29 15:02:56.257: INFO: Updating deployment test-rollover-deployment
Jan 29 15:02:56.257: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 29
15:02:58.265: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 29 15:02:58.271: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 29 15:02:58.276: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:02:58.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 15:03:00.283: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:03:00.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 3, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 15:03:02.284: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:03:02.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 3, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 15:03:04.286: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:03:04.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 3, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 
15:03:06.284: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:03:06.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 3, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 15:03:08.288: INFO: all replica sets need to contain the pod-template-hash label Jan 29 15:03:08.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 3, 0, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 2, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 29 15:03:10.285: INFO: Jan 29 15:03:10.285: INFO: Ensure that both old replica sets have no replicas 
[AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 29 15:03:10.298: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9907 fb6f4002-51f6-4efe-bf59-c084a12caa1c 6625 2 2023-01-29 15:02:54 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-29 15:02:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] 
map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b92de8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-29 15:02:54 +0000 UTC,LastTransitionTime:2023-01-29 15:02:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-77db6f9f48" has successfully progressed.,LastUpdateTime:2023-01-29 15:03:10 +0000 UTC,LastTransitionTime:2023-01-29 15:02:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 29 15:03:10.304: INFO: New ReplicaSet "test-rollover-deployment-77db6f9f48" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-77db6f9f48 deployment-9907 426ef3d7-5610-409f-bca1-3ae49d9a7f3e 6614 2 2023-01-29 15:02:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment fb6f4002-51f6-4efe-bf59-c084a12caa1c 0xc0043bc257 0xc0043bc258}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:02:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f4002-51f6-4efe-bf59-c084a12caa1c\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:10 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 77db6f9f48,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043bc308 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:03:10.304: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 29 15:03:10.304: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9907 5797cabb-d003-464b-87c1-3b74f39af8c9 6624 2 2023-01-29 15:02:22 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment fb6f4002-51f6-4efe-bf59-c084a12caa1c 0xc0043bc12f 0xc0043bc140}] [] [{e2e.test Update apps/v1 2023-01-29 15:02:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f4002-51f6-4efe-bf59-c084a12caa1c\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043bc1f8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:03:10.304: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-9907 2be5ad32-7827-40e3-a63b-0e1f6e19a73d 5789 2 2023-01-29 15:02:54 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment fb6f4002-51f6-4efe-bf59-c084a12caa1c 0xc0043bc367 0xc0043bc368}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:02:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f4002-51f6-4efe-bf59-c084a12caa1c\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:02:56 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043bc418 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] 
<nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:03:10.307: INFO: Pod "test-rollover-deployment-77db6f9f48-rp8vm" is available: &Pod{ObjectMeta:{test-rollover-deployment-77db6f9f48-rp8vm test-rollover-deployment-77db6f9f48- deployment-9907 56f9250d-2987-4795-8d7b-b8f62f757bb2 6301 0 2023-01-29 15:02:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [{apps/v1 ReplicaSet test-rollover-deployment-77db6f9f48 426ef3d7-5610-409f-bca1-3ae49d9a7f3e 0xc0043bc947 0xc0043bc948}] [] [{kube-controller-manager Update v1 2023-01-29 15:02:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"426ef3d7-5610-409f-bca1-3ae49d9a7f3e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:03:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h7qmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h7qmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:02:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:02:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:02:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:02:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.40,StartTime:2023-01-29 15:02:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:02:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://5bfee7cfa21157f539c6e499926a80ab35326537dd6995b3bd0d3e0c895e974f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:10.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9907" for this suite.
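The rollover Deployment dumped above declares `Strategy:RollingUpdate` with `MaxUnavailable:0`, `MaxSurge:1`, and `MinReadySeconds:10`, which is why the old ReplicaSet only scales to 0 after the replacement pod has been Ready for the full ready window. A minimal sketch of how those two knobs bound a rollout (plain Python arithmetic; the real controller additionally resolves percentage values with rounding, which this sketch ignores):

```python
# Sketch: how maxSurge / maxUnavailable bound a rolling update.
# Assumes absolute integer values; the real controller also accepts
# percentages and rounds them against the replica count.

def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (max_total_pods, min_available_pods) during a rollout."""
    return replicas + max_surge, replicas - max_unavailable

# Values from the "test-rollover-deployment" spec above:
# replicas=1, maxSurge=1, maxUnavailable=0
max_total, min_available = rolling_update_bounds(1, 1, 0)
print(max_total, min_available)  # 2 1
```

With `maxUnavailable=0` the availability floor equals the full replica count, so the rollout can only make progress by surging above it, never by taking an old pod down first.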
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":9,"skipped":208,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:05.362: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:11.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4527" for this suite.
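The DisruptionController steps above (create, update, patch the pdb) all operate on a small object; a minimal sketch of a PodDisruptionBudget of the shape being exercised, with illustrative name and selector (the e2e test generates its own, which the log does not show):

```python
import json

# Hypothetical PodDisruptionBudget; name/selector are illustrative only.
# The namespace matches the one the test created ("disruption-4527").
pdb = {
    "apiVersion": "policy/v1",
    "kind": "PodDisruptionBudget",
    "metadata": {"name": "example-pdb", "namespace": "disruption-4527"},
    "spec": {
        "minAvailable": 1,
        "selector": {"matchLabels": {"app": "example"}},
    },
}

# "updating the pdb" / "patching the pdb" both amount to mutating spec
# fields; one common patch swaps the budget from minAvailable to
# maxUnavailable (a PDB may carry only one of the two):
pdb["spec"].pop("minAvailable")
pdb["spec"]["maxUnavailable"] = "50%"

print(json.dumps(pdb["spec"], sort_keys=True))
```

After each mutation the test waits for the controller to reconcile ("Waiting for the pdb to be processed") before issuing the next one.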
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":9,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:11.473: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:11.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3682" for this suite.
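The ConfigMap lifecycle above ends with a list and a delete-by-collection against a label selector; an equality-only `matchLabels` selector is simply a subset test on the object's labels map. A minimal sketch in plain Python with illustrative objects and label keys (not the client library or the names the test generates):

```python
def matches(labels: dict, selector: dict) -> bool:
    """Equality-based label selector: every selector pair must be present
    in the object's labels with the same value."""
    return all(labels.get(k) == v for k, v in selector.items())

# Illustrative ConfigMaps; the label key is hypothetical.
configmaps = [
    {"name": "cm-a", "labels": {"test-configmap": "true"}},
    {"name": "cm-b", "labels": {}},
]
selector = {"test-configmap": "true"}

# "listing all ConfigMaps ... with a label selector"
listed = [cm["name"] for cm in configmaps if matches(cm["labels"], selector)]

# "deleting the ConfigMap by collection with a label selector" removes
# every object the selector matches, leaving the rest untouched.
configmaps = [cm for cm in configmaps if not matches(cm["labels"], selector)]

print(listed, [cm["name"] for cm in configmaps])  # ['cm-a'] ['cm-b']
```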
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":10,"skipped":164,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:10.324: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-b8e1b6ee-bdd1-4a5a-bd8e-23f33d88446f
STEP: Creating a pod to test consume configMaps
Jan 29 15:03:10.359: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea" in namespace "projected-5143" to be "Succeeded or Failed"
Jan 29 15:03:10.366: INFO: Pod "pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151659ms
Jan 29 15:03:12.371: INFO: Pod "pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011736554s
Jan 29 15:03:14.376: INFO: Pod "pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.01619306s
STEP: Saw pod success
Jan 29 15:03:14.376: INFO: Pod "pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea" satisfied condition "Succeeded or Failed"
Jan 29 15:03:14.378: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:03:14.396: INFO: Waiting for pod pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea to disappear
Jan 29 15:03:14.398: INFO: Pod pod-projected-configmaps-7b90d53f-48c8-484f-9b64-08b03eba2bea no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:14.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5143" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":213,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:02:07.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name s-test-opt-del-51d2845c-e34f-4385-b930-63ad08f89e64
STEP: Creating secret with name s-test-opt-upd-063dc5a8-31b8-489f-9b94-fb73588d1782
STEP: Creating the pod
Jan 29 15:02:07.793: INFO: The status of Pod pod-projected-secrets-03e45d8b-18f7-4296-95dd-de1fc6c77ade is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:02:09.800: INFO: The status of Pod pod-projected-secrets-03e45d8b-18f7-4296-95dd-de1fc6c77ade is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:02:11.800: INFO: The status of Pod pod-projected-secrets-03e45d8b-18f7-4296-95dd-de1fc6c77ade is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-51d2845c-e34f-4385-b930-63ad08f89e64
STEP: Updating secret s-test-opt-upd-063dc5a8-31b8-489f-9b94-fb73588d1782
STEP: Creating secret with name s-test-opt-create-69d530ed-70e5-4b53-afce-25b6923658e7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:20.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8225" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:14.486: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:03:14.504: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 29 15:03:14.512: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 29 15:03:19.516: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 15:03:19.516: INFO: Creating deployment "test-rolling-update-deployment"
Jan 29 15:03:19.520: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 29 15:03:19.526: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 29 15:03:21.534: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the
expected Jan 29 15:03:21.538: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 29 15:03:21.548: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-748 e077f4d6-5988-45e6-bc6c-f943b50c2944 6871 1 2023-01-29 15:03:19 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-29 15:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003abc628 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-29 15:03:19 +0000 UTC,LastTransitionTime:2023-01-29 15:03:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8656fc4b57" has successfully progressed.,LastUpdateTime:2023-01-29 15:03:20 +0000 UTC,LastTransitionTime:2023-01-29 15:03:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 29 15:03:21.551: INFO: New ReplicaSet "test-rolling-update-deployment-8656fc4b57" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8656fc4b57 deployment-748 2a2582dd-8562-44ce-9180-b9593f7e085a 6861 1 2023-01-29 15:03:19 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e077f4d6-5988-45e6-bc6c-f943b50c2944 0xc003abcaf7 0xc003abcaf8}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e077f4d6-5988-45e6-bc6c-f943b50c2944\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:20 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8656fc4b57,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003abcba8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:03:21.551: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 29 15:03:21.552: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-748 aae46e9f-3116-4ba3-b955-e51895d3d2d2 6870 2 2023-01-29 15:03:14 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e077f4d6-5988-45e6-bc6c-f943b50c2944 0xc003abc9cf 0xc003abc9e0}] [] [{e2e.test Update apps/v1 2023-01-29 15:03:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } 
{kube-controller-manager Update apps/v1 2023-01-29 15:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e077f4d6-5988-45e6-bc6c-f943b50c2944\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:03:20 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003abca98 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:03:21.556: INFO: Pod "test-rolling-update-deployment-8656fc4b57-nvwbm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8656fc4b57-nvwbm test-rolling-update-deployment-8656fc4b57- deployment-748 d7f971fb-d682-4803-a24c-233e28edeff2 6860 0 2023-01-29 15:03:19 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8656fc4b57 2a2582dd-8562-44ce-9180-b9593f7e085a 0xc003c79707 0xc003c79708}] [] [{kube-controller-manager Update v1 2023-01-29 15:03:19 +0000 
UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a2582dd-8562-44ce-9180-b9593f7e085a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:03:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vpbwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vpbwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:03:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:03:20 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:03:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.33,StartTime:2023-01-29 15:03:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:03:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://cc3af1d44a72c792e71375e0f3ad218d1dbccb60bbc45a18848d12d89ecb7af7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:21.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-748" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":11,"skipped":273,"failed":0}
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:21.639: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should patch a secret [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:21.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3892" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":12,"skipped":312,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:20.659: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jan 29 15:03:20.957: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:03:20.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:03:23.996: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:24.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6916" for this suite.
STEP: Destroying namespace "webhook-6916-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":4,"skipped":65,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:21.717: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 29 15:03:21.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3830 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Jan 29 15:03:21.830: INFO: stderr: ""
Jan 29 15:03:21.830: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jan 29 15:03:21.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3830 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server'
Jan 29 15:03:22.850: INFO: stderr: ""
Jan 29 15:03:22.850: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 29 15:03:22.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3830 delete pods e2e-test-httpd-pod'
Jan 29 15:03:24.301: INFO: stderr: ""
Jan 29 15:03:24.301: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:24.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3830" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":13,"skipped":324,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:24.355: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:03:24.389: INFO: The status of Pod busybox-host-aliasesaa268654-0e60-4cd4-8967-316146abc0f8 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:03:26.393: INFO: The status of Pod busybox-host-aliasesaa268654-0e60-4cd4-8967-316146abc0f8 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:26.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7634" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":359,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:11.552: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:03:12.018: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:03:15.040: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:27.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3818" for this suite.
STEP: Destroying namespace "webhook-3818-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":11,"skipped":178,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:24.113: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:35.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4155" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:27.303: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:38.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3905" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":12,"skipped":184,"failed":0}
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":8,"skipped":162,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:08.830: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:03:08.914: INFO: created pod
Jan 29 15:03:08.915: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4415" to be "Succeeded or Failed"
Jan 29 15:03:08.923: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347053ms
Jan 29 15:03:10.928: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013010283s
Jan 29 15:03:12.933: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01745542s
STEP: Saw pod success
Jan 29 15:03:12.933: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jan 29 15:03:42.933: INFO: polling logs
Jan 29 15:03:42.939: INFO: Pod logs:
I0129 15:03:09.570790 1 log.go:195] OK: Got token
I0129 15:03:09.570837 1 log.go:195] validating with in-cluster discovery
I0129 15:03:09.571281 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local
I0129 15:03:09.571318 1 log.go:195] Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4415:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1675005189, NotBefore:1675004589, IssuedAt:1675004589, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4415", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"524ce7c5-5614-4a1e-bbd1-b361f110f79a"}}}
I0129 15:03:09.616850 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
I0129 15:03:09.624294 1 log.go:195] OK: Validated signature on JWT
I0129 15:03:09.624433 1 log.go:195] OK: Got valid claims from token!
I0129 15:03:09.624485 1 log.go:195] Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4415:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1675005189, NotBefore:1675004589, IssuedAt:1675004589, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4415", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"524ce7c5-5614-4a1e-bbd1-b361f110f79a"}}} Jan 29 15:03:42.939: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:03:42.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-4415" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":9,"skipped":162,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:03:38.776: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test substitution in container's command Jan 29 
15:03:38.806: INFO: Waiting up to 5m0s for pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6" in namespace "var-expansion-4091" to be "Succeeded or Failed" Jan 29 15:03:38.809: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.817921ms Jan 29 15:03:40.812: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006511889s Jan 29 15:03:42.817: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010853782s Jan 29 15:03:44.821: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015025967s Jan 29 15:03:46.826: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019610812s Jan 29 15:03:48.829: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023148947s Jan 29 15:03:50.833: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.027318127s �[1mSTEP�[0m: Saw pod success Jan 29 15:03:50.833: INFO: Pod "var-expansion-30fe8264-aa83-4652-9df4-023646721cc6" satisfied condition "Succeeded or Failed" Jan 29 15:03:50.836: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod var-expansion-30fe8264-aa83-4652-9df4-023646721cc6 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:03:50.852: INFO: Waiting for pod var-expansion-30fe8264-aa83-4652-9df4-023646721cc6 to disappear Jan 29 15:03:50.854: INFO: Pod var-expansion-30fe8264-aa83-4652-9df4-023646721cc6 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:03:50.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-4091" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":204,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:03:42.966: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on node default medium Jan 29 15:03:42.992: 
INFO: Waiting up to 5m0s for pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5" in namespace "emptydir-7404" to be "Succeeded or Failed"
Jan 29 15:03:42.995: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84778ms
Jan 29 15:03:44.999: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006353129s
Jan 29 15:03:47.002: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009868311s
Jan 29 15:03:49.006: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014257449s
Jan 29 15:03:51.013: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020744885s
Jan 29 15:03:53.017: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025254713s
STEP: Saw pod success
Jan 29 15:03:53.018: INFO: Pod "pod-1882ae37-cf57-4ea2-bea2-e109dc27def5" satisfied condition "Succeeded or Failed"
Jan 29 15:03:53.022: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-1882ae37-cf57-4ea2-bea2-e109dc27def5 container test-container: <nil>
STEP: delete the pod
Jan 29 15:03:53.040: INFO: Waiting for pod pod-1882ae37-cf57-4ea2-bea2-e109dc27def5 to disappear
Jan 29 15:03:53.043: INFO: Pod pod-1882ae37-cf57-4ea2-bea2-e109dc27def5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:53.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7404" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":172,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:50.870: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:03:54.950: INFO: Deleting pod "var-expansion-f64c150c-17ae-4c7f-8234-ffd4a6d8a7c6" in namespace "var-expansion-5984"
Jan 29 15:03:54.957: INFO: Wait up to 5m0s for pod "var-expansion-f64c150c-17ae-4c7f-8234-ffd4a6d8a7c6" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:03:56.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5984" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":14,"skipped":207,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:35.584: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:03.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2258" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:04.032: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:04:04.744: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:04:07.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the
/apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:07.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-640" for this suite.
STEP: Destroying namespace "webhook-640-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":180,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:26.417: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 29 15:04:06.517: INFO: The status of Pod
kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true)
Jan 29 15:04:06.582: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 29 15:04:06.582: INFO: Deleting pod "simpletest.rc-27hcn" in namespace "gc-3736"
Jan 29 15:04:06.592: INFO: Deleting pod "simpletest.rc-29mr2" in namespace "gc-3736"
Jan 29 15:04:06.600: INFO: Deleting pod "simpletest.rc-2fz9d" in namespace "gc-3736"
Jan 29 15:04:06.611: INFO: Deleting pod "simpletest.rc-2jr7g" in namespace "gc-3736"
Jan 29 15:04:06.619: INFO: Deleting pod "simpletest.rc-2md6h" in namespace "gc-3736"
Jan 29 15:04:06.628: INFO: Deleting pod "simpletest.rc-2qh6t" in namespace "gc-3736"
Jan 29 15:04:06.637: INFO: Deleting pod "simpletest.rc-42kq7" in namespace "gc-3736"
Jan 29 15:04:06.661: INFO: Deleting pod "simpletest.rc-479r7" in namespace "gc-3736"
Jan 29 15:04:06.674: INFO: Deleting pod "simpletest.rc-4jvwb" in namespace "gc-3736"
Jan 29 15:04:06.689: INFO: Deleting pod "simpletest.rc-4nqhm" in namespace "gc-3736"
Jan 29 15:04:06.707: INFO: Deleting pod "simpletest.rc-55sz4" in namespace "gc-3736"
Jan 29 15:04:06.727: INFO: Deleting pod "simpletest.rc-5jspg" in namespace "gc-3736"
Jan 29 15:04:06.761: INFO: Deleting pod "simpletest.rc-5vwjx" in namespace "gc-3736"
Jan 29 15:04:06.784: INFO: Deleting pod "simpletest.rc-67d9s" in namespace "gc-3736"
Jan 29 15:04:06.867: INFO: Deleting pod "simpletest.rc-6844t" in namespace "gc-3736"
Jan 29 15:04:06.889: INFO: Deleting pod "simpletest.rc-6cd2z" in namespace "gc-3736"
Jan 29 15:04:06.916: INFO: Deleting pod "simpletest.rc-6p5bw" in namespace "gc-3736"
Jan 29 15:04:06.958: INFO: Deleting pod "simpletest.rc-6rtcw" in namespace "gc-3736"
Jan 29 15:04:06.972: INFO: Deleting pod "simpletest.rc-6t6fl" in namespace "gc-3736"
Jan 29 15:04:07.019: INFO: Deleting pod "simpletest.rc-8f8s4" in namespace "gc-3736"
Jan 29 15:04:07.036: INFO: Deleting pod "simpletest.rc-8fwv2" in namespace "gc-3736"
Jan 29 15:04:07.078: INFO: Deleting pod "simpletest.rc-8psr7" in namespace "gc-3736"
Jan 29 15:04:07.108: INFO: Deleting pod "simpletest.rc-8vx4n" in namespace "gc-3736"
Jan 29 15:04:07.150: INFO: Deleting pod "simpletest.rc-8zjws" in namespace "gc-3736"
Jan 29 15:04:07.174: INFO: Deleting pod "simpletest.rc-974dr" in namespace "gc-3736"
Jan 29 15:04:07.200: INFO: Deleting pod "simpletest.rc-9njqh" in namespace "gc-3736"
Jan 29 15:04:07.243: INFO: Deleting pod "simpletest.rc-9r9lq" in namespace "gc-3736"
Jan 29 15:04:07.267: INFO: Deleting pod "simpletest.rc-9vt9r" in namespace "gc-3736"
Jan 29 15:04:07.292: INFO: Deleting pod "simpletest.rc-9xkrd" in namespace "gc-3736"
Jan 29 15:04:07.311: INFO: Deleting pod "simpletest.rc-b88ls" in namespace "gc-3736"
Jan 29 15:04:07.334: INFO: Deleting pod "simpletest.rc-bq6xh" in namespace "gc-3736"
Jan 29 15:04:07.388: INFO: Deleting pod "simpletest.rc-bzv84" in namespace "gc-3736"
Jan 29 15:04:07.416: INFO: Deleting pod "simpletest.rc-c6hpk" in namespace "gc-3736"
Jan 29 15:04:07.428: INFO: Deleting pod "simpletest.rc-c8dsj" in namespace "gc-3736"
Jan 29 15:04:07.463: INFO: Deleting pod "simpletest.rc-c9zk7" in namespace "gc-3736"
Jan 29 15:04:07.481: INFO: Deleting pod "simpletest.rc-cc6pk" in namespace "gc-3736"
Jan 29 15:04:07.520: INFO: Deleting pod "simpletest.rc-crgsv" in namespace "gc-3736"
Jan 29 15:04:07.586: INFO: Deleting pod "simpletest.rc-d4sl2" in namespace "gc-3736"
Jan 29 15:04:07.616: INFO: Deleting pod "simpletest.rc-d6bsc" in namespace "gc-3736"
Jan 29 15:04:07.691: INFO: Deleting pod "simpletest.rc-dfhp4" in namespace "gc-3736"
Jan 29 15:04:07.730: INFO: Deleting pod "simpletest.rc-dfv2s" in namespace "gc-3736"
Jan 29 15:04:07.765: INFO: Deleting pod "simpletest.rc-dvhm5" in namespace "gc-3736"
Jan 29 15:04:07.805: INFO: Deleting pod "simpletest.rc-dw8vn" in namespace "gc-3736"
Jan 29 15:04:07.832: INFO: Deleting pod "simpletest.rc-dz9ft" in namespace "gc-3736"
Jan 29 15:04:07.858: INFO: Deleting pod "simpletest.rc-f9w4t" in namespace "gc-3736"
Jan 29 15:04:07.894: INFO: Deleting pod "simpletest.rc-fd4bc" in namespace "gc-3736"
Jan 29 15:04:07.947: INFO: Deleting pod "simpletest.rc-gm46q" in namespace "gc-3736"
Jan 29 15:04:08.149: INFO: Deleting pod "simpletest.rc-gsmwd" in namespace "gc-3736"
Jan 29 15:04:08.215: INFO: Deleting pod "simpletest.rc-hdf8s" in namespace "gc-3736"
Jan 29 15:04:08.291: INFO: Deleting pod "simpletest.rc-hdn49" in namespace "gc-3736"
Jan 29 15:04:08.404: INFO: Deleting pod "simpletest.rc-hp9zh" in namespace "gc-3736"
Jan 29 15:04:08.458: INFO: Deleting pod "simpletest.rc-hvjmm" in namespace "gc-3736"
Jan 29 15:04:08.487: INFO: Deleting pod "simpletest.rc-hvtx9" in namespace "gc-3736"
Jan 29 15:04:08.537: INFO: Deleting pod "simpletest.rc-j65vc" in namespace "gc-3736"
Jan 29 15:04:08.554: INFO: Deleting pod "simpletest.rc-jfhqb" in namespace "gc-3736"
Jan 29 15:04:08.627: INFO: Deleting pod "simpletest.rc-jlm8l" in namespace "gc-3736"
Jan 29 15:04:08.646: INFO: Deleting pod "simpletest.rc-k9f9g" in namespace "gc-3736"
Jan 29 15:04:08.691: INFO: Deleting pod "simpletest.rc-kcp5j" in namespace "gc-3736"
Jan 29 15:04:08.708: INFO: Deleting pod "simpletest.rc-lbfmd" in namespace "gc-3736"
Jan 29 15:04:08.737: INFO: Deleting pod "simpletest.rc-m7vnc" in namespace "gc-3736"
Jan 29 15:04:08.773: INFO: Deleting pod "simpletest.rc-nn4kh" in namespace "gc-3736"
Jan 29 15:04:08.812: INFO: Deleting pod "simpletest.rc-nx4l2" in namespace "gc-3736"
Jan 29 15:04:08.851: INFO: Deleting pod "simpletest.rc-p4df2" in namespace "gc-3736"
Jan 29 15:04:08.891: INFO: Deleting pod "simpletest.rc-p9nkw" in namespace "gc-3736"
Jan 29 15:04:08.975: INFO: Deleting pod "simpletest.rc-pmgp6" in namespace "gc-3736"
Jan 29 15:04:09.044: INFO: Deleting pod "simpletest.rc-pmzw6" in namespace "gc-3736"
Jan 29 15:04:09.179: INFO: Deleting pod "simpletest.rc-pz7k6" in namespace "gc-3736"
Jan 29 15:04:09.228: INFO: Deleting pod "simpletest.rc-q85q4" in namespace "gc-3736"
Jan 29 15:04:09.345: INFO: Deleting pod "simpletest.rc-q9mnn" in namespace "gc-3736"
Jan 29 15:04:09.420: INFO: Deleting pod "simpletest.rc-qr9kz" in namespace "gc-3736"
Jan 29 15:04:09.438: INFO: Deleting pod "simpletest.rc-qv7xx" in namespace "gc-3736"
Jan 29 15:04:09.460: INFO: Deleting pod "simpletest.rc-rcmtz" in namespace "gc-3736"
Jan 29 15:04:09.480: INFO: Deleting pod "simpletest.rc-rcp9c" in namespace "gc-3736"
Jan 29 15:04:09.531: INFO: Deleting pod "simpletest.rc-rgng7" in namespace "gc-3736"
Jan 29 15:04:09.549: INFO: Deleting pod "simpletest.rc-rlngc" in namespace "gc-3736"
Jan 29 15:04:09.575: INFO: Deleting pod "simpletest.rc-rnzcx" in namespace "gc-3736"
Jan 29 15:04:09.610: INFO: Deleting pod "simpletest.rc-rr7kb" in namespace "gc-3736"
Jan 29 15:04:09.658: INFO: Deleting pod "simpletest.rc-rx58x" in namespace "gc-3736"
Jan 29 15:04:09.687: INFO: Deleting pod "simpletest.rc-s466z" in namespace "gc-3736"
Jan 29 15:04:09.708: INFO: Deleting pod "simpletest.rc-s59cj" in namespace "gc-3736"
Jan 29 15:04:09.751: INFO: Deleting pod "simpletest.rc-sczcs" in namespace "gc-3736"
Jan 29 15:04:09.770: INFO: Deleting pod "simpletest.rc-sst6w" in namespace "gc-3736"
Jan 29 15:04:09.799: INFO: Deleting pod "simpletest.rc-swgsv" in namespace "gc-3736"
Jan 29 15:04:09.872: INFO: Deleting pod "simpletest.rc-t5df8" in namespace "gc-3736"
Jan 29 15:04:09.937: INFO: Deleting pod "simpletest.rc-t7nt4" in namespace "gc-3736"
Jan 29 15:04:09.957: INFO: Deleting pod "simpletest.rc-tdrq2" in namespace "gc-3736"
Jan 29 15:04:10.024: INFO: Deleting pod "simpletest.rc-tnnrf" in namespace "gc-3736"
Jan 29 15:04:10.039: INFO: Deleting pod "simpletest.rc-v5kw9" in namespace "gc-3736"
Jan 29 15:04:10.066: INFO: Deleting pod "simpletest.rc-v9ktx" in namespace "gc-3736"
Jan 29 15:04:10.089: INFO: Deleting pod "simpletest.rc-vcwvq" in namespace "gc-3736"
Jan 29 15:04:10.114: INFO: Deleting pod "simpletest.rc-vftjj" in namespace "gc-3736"
Jan 29 15:04:10.157: INFO: Deleting pod "simpletest.rc-vsvs4" in namespace "gc-3736"
Jan 29 15:04:10.173: INFO: Deleting pod "simpletest.rc-wbscf" in namespace "gc-3736"
Jan 29 15:04:10.212: INFO: Deleting pod "simpletest.rc-x5cln" in namespace "gc-3736"
Jan 29 15:04:10.240: INFO: Deleting pod "simpletest.rc-x8rft" in namespace "gc-3736"
Jan 29 15:04:10.260: INFO: Deleting pod "simpletest.rc-xl88g" in namespace "gc-3736"
Jan 29 15:04:10.296: INFO: Deleting pod "simpletest.rc-xvqvz" in namespace "gc-3736"
Jan 29 15:04:10.331: INFO: Deleting pod "simpletest.rc-xx57j" in namespace "gc-3736"
Jan 29 15:04:10.506: INFO: Deleting pod "simpletest.rc-xxqws" in namespace "gc-3736"
Jan 29 15:04:10.642: INFO: Deleting pod "simpletest.rc-zstmv" in namespace "gc-3736"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:10.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3736" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":15,"skipped":364,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:56.991: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:14.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5379" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":15,"skipped":211,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:14.238: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jan 29 15:04:14.278: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jan 29 15:04:14.303: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:14.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-2987" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":16,"skipped":220,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:10.910: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:04:10.977: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 29 15:04:13.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2641 --namespace=crd-publish-openapi-2641 create -f -'
Jan 29 15:04:14.452: INFO: stderr: ""
Jan 29 15:04:14.452: INFO: stdout: "e2e-test-crd-publish-openapi-2435-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 29 15:04:14.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2641 --namespace=crd-publish-openapi-2641 delete e2e-test-crd-publish-openapi-2435-crds test-cr'
Jan 29 15:04:14.536: INFO: stderr: ""
Jan 29 15:04:14.536: INFO: stdout: "e2e-test-crd-publish-openapi-2435-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 29 15:04:14.536: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2641 --namespace=crd-publish-openapi-2641 apply -f -'
Jan 29 15:04:14.768: INFO: stderr: ""
Jan 29 15:04:14.768: INFO: stdout: "e2e-test-crd-publish-openapi-2435-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 29 15:04:14.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2641 --namespace=crd-publish-openapi-2641 delete e2e-test-crd-publish-openapi-2435-crds test-cr'
Jan 29 15:04:14.878: INFO: stderr: ""
Jan 29 15:04:14.878: INFO: stdout: "e2e-test-crd-publish-openapi-2435-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 29 15:04:14.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2641 explain e2e-test-crd-publish-openapi-2435-crds'
Jan 29 15:04:15.075: INFO: stderr: ""
Jan 29 15:04:15.075: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2435-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:17.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2641" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":16,"skipped":389,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:17.484: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should find a service from listing all namespaces [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching services
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:17.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-465" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":17,"skipped":391,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:14.345: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:04:14.779: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:04:17.809: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 29 15:04:17.829: INFO: >>> kubeConfig:
/tmp/kubeconfig
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:17.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4301" for this suite.
STEP: Destroying namespace "webhook-4301-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":17,"skipped":223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:17.523: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 29 15:04:17.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf" in namespace "downward-api-5987" to be "Succeeded or Failed" Jan 29 15:04:17.580: INFO: Pod "downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392852ms Jan 29 15:04:19.585: INFO: Pod "downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008687075s Jan 29 15:04:21.590: INFO: Pod "downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013658829s �[1mSTEP�[0m: Saw pod success Jan 29 15:04:21.590: INFO: Pod "downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf" satisfied condition "Succeeded or Failed" Jan 29 15:04:21.593: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:04:21.606: INFO: Waiting for pod downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf to disappear Jan 29 15:04:21.609: INFO: Pod downwardapi-volume-10adddad-59b5-43e8-8300-a0b0a1d559bf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:21.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-5987" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":392,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:17.984: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:04:18.014: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a" in namespace "security-context-test-5431" to be "Succeeded or Failed"
Jan 29 15:04:18.018: INFO: Pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864877ms
Jan 29 15:04:20.022: INFO: Pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007906022s
Jan 29 15:04:22.026: INFO: Pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012086133s
Jan 29 15:04:22.027: INFO: Pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a" satisfied condition "Succeeded or Failed"
Jan 29 15:04:22.032: INFO: Got logs for pod "busybox-privileged-false-bb0833c0-f04c-46f1-a13c-b863b69bc06a": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:22.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5431" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:21.627: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-d3275301-9002-4593-8488-07cccc65b1a6
STEP: Creating a pod to test consume configMaps
Jan 29 15:04:21.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6" in namespace "projected-8009" to be "Succeeded or Failed"
Jan 29 15:04:21.664: INFO: Pod "pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642152ms
Jan 29 15:04:23.672: INFO: Pod "pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011496029s
Jan 29 15:04:25.676: INFO: Pod "pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015396444s
STEP: Saw pod success
Jan 29 15:04:25.676: INFO: Pod "pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6" satisfied condition "Succeeded or Failed"
Jan 29 15:04:25.679: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:04:25.694: INFO: Waiting for pod pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6 to disappear
Jan 29 15:04:25.697: INFO: Pod pod-projected-configmaps-79e00a43-3d07-4ccd-a3a0-a592cd311de6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:25.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8009" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":398,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:25.713: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 29 15:04:25.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9193 62b7d5f4-6437-43a0-9b85-53f4c2202af3 9715 0 2023-01-29 15:04:25 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 29 15:04:25.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9193 62b7d5f4-6437-43a0-9b85-53f4c2202af3 9717 0 2023-01-29 15:04:25 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 29 15:04:25.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9193 62b7d5f4-6437-43a0-9b85-53f4c2202af3 9718 0 2023-01-29 15:04:25 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 29 15:04:25.757: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9193 62b7d5f4-6437-43a0-9b85-53f4c2202af3 9719 0 2023-01-29 15:04:25 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:25.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9193" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":20,"skipped":405,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:25.860: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-0f0e8471-8dae-4851-90d7-38da77981abd
STEP: Creating a pod to test consume secrets
Jan 29 15:04:25.891: INFO: Waiting up to 5m0s for pod "pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779" in namespace "secrets-9847" to be "Succeeded or Failed"
Jan 29 15:04:25.893: INFO: Pod "pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.835754ms
Jan 29 15:04:27.899: INFO: Pod "pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007929225s
Jan 29 15:04:29.903: INFO: Pod "pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012579153s
STEP: Saw pod success
Jan 29 15:04:29.903: INFO: Pod "pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779" satisfied condition "Succeeded or Failed"
Jan 29 15:04:29.906: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779 container secret-env-test: <nil>
STEP: delete the pod
Jan 29 15:04:29.922: INFO: Waiting for pod pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779 to disappear
Jan 29 15:04:29.925: INFO: Pod pod-secrets-34413a96-a055-4b20-a9b3-4bf9fdef1779 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:29.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9847" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":473,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:08.246: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-23dc8ceb-d572-43b6-884a-28c9ef0df6cd in namespace container-probe-4738
Jan 29 15:04:14.425: INFO: Started pod liveness-23dc8ceb-d572-43b6-884a-28c9ef0df6cd in namespace container-probe-4738
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 15:04:14.428: INFO: Initial restart count of pod liveness-23dc8ceb-d572-43b6-884a-28c9ef0df6cd is 0
Jan 29 15:04:30.477: INFO: Restart count of pod container-probe-4738/liveness-23dc8ceb-d572-43b6-884a-28c9ef0df6cd is now 1 (16.049016393s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:30.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4738" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":188,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:30.518: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 29 15:04:30.549: INFO: Waiting up to 5m0s for pod "downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945" in namespace "downward-api-6482" to be "Succeeded or Failed"
Jan 29 15:04:30.559: INFO: Pod "downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945": Phase="Pending", Reason="", readiness=false. Elapsed: 8.799287ms
Jan 29 15:04:32.564: INFO: Pod "downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013143594s
Jan 29 15:04:34.575: INFO: Pod "downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024738051s
STEP: Saw pod success
Jan 29 15:04:34.575: INFO: Pod "downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945" satisfied condition "Succeeded or Failed"
Jan 29 15:04:34.579: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945 container dapi-container: <nil>
STEP: delete the pod
Jan 29 15:04:34.607: INFO: Waiting for pod downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945 to disappear
Jan 29 15:04:34.610: INFO: Pod downward-api-6cbb6f1f-d7d1-4fbd-af2f-8a28f0620945 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:34.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6482" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":199,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:29.966: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 29 15:04:30.001: INFO: The status of Pod labelsupdateb0d5615a-aa85-424c-96c3-a7e8904172bb is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:04:32.014: INFO: The status of Pod labelsupdateb0d5615a-aa85-424c-96c3-a7e8904172bb is Running (Ready = true)
Jan 29 15:04:32.543: INFO: Successfully updated pod "labelsupdateb0d5615a-aa85-424c-96c3-a7e8904172bb"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:36.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1146" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":489,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:36.621: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 29 15:04:36.684: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6335 10d4c359-585e-4071-8d16-0e00f4acddb5 9954 0 2023-01-29 15:04:36 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 29 15:04:36.684: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6335 10d4c359-585e-4071-8d16-0e00f4acddb5 9956 0 2023-01-29 15:04:36 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:36.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6335" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":23,"skipped":513,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:36.698: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:04:36.723: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 29 15:04:39.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 create -f -'
Jan 29 15:04:40.220: INFO: stderr: ""
Jan 29 15:04:40.220: INFO: stdout: "e2e-test-crd-publish-openapi-5138-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 29 15:04:40.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 delete e2e-test-crd-publish-openapi-5138-crds test-foo'
Jan 29 15:04:40.297: INFO: stderr: ""
Jan 29 15:04:40.297: INFO: stdout: "e2e-test-crd-publish-openapi-5138-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 29 15:04:40.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 apply -f -'
Jan 29 15:04:40.502: INFO: stderr: ""
Jan 29 15:04:40.502: INFO: stdout: "e2e-test-crd-publish-openapi-5138-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 29 15:04:40.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 delete e2e-test-crd-publish-openapi-5138-crds test-foo'
Jan 29 15:04:40.588: INFO: stderr: ""
Jan 29 15:04:40.588: INFO: stdout: "e2e-test-crd-publish-openapi-5138-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values
Jan 29 15:04:40.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 create -f -'
Jan 29 15:04:40.791: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 29 15:04:40.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 create -f -'
Jan 29 15:04:40.999: INFO: rc: 1
Jan 29 15:04:40.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 apply -f -'
Jan 29 15:04:41.199: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 29 15:04:41.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 create -f -'
Jan 29 15:04:41.385: INFO: rc: 1
Jan 29 15:04:41.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 --namespace=crd-publish-openapi-272 apply -f -'
Jan 29 15:04:41.573: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 29 15:04:41.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 explain e2e-test-crd-publish-openapi-5138-crds'
Jan 29 15:04:41.799: INFO: stderr: ""
Jan 29 15:04:41.799: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5138-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 29 15:04:41.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 explain e2e-test-crd-publish-openapi-5138-crds.metadata'
Jan 29 15:04:42.170: INFO: stderr: ""
Jan 29 15:04:42.170: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5138-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and .
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 29 15:04:42.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 explain e2e-test-crd-publish-openapi-5138-crds.spec' Jan 29 15:04:42.425: INFO: stderr: "" Jan 29 15:04:42.425: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5138-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 29 15:04:42.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-272 explain e2e-test-crd-publish-openapi-5138-crds.spec.bars' Jan 29 15:04:42.652: INFO: stderr: "" Jan 29 15:04:42.652: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5138-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 29 15:04:42.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig
--namespace=crd-publish-openapi-272 explain e2e-test-crd-publish-openapi-5138-crds.spec.bars2' Jan 29 15:04:42.863: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-272" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":24,"skipped":516,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:04:45.107: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 29 15:04:45.622: INFO: Checking APIGroup: apiregistration.k8s.io Jan 29 15:04:45.624: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 29 15:04:45.624: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] Jan 29 15:04:45.624: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan
29 15:04:45.624: INFO: Checking APIGroup: apps Jan 29 15:04:45.625: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 29 15:04:45.625: INFO: Versions found [{apps/v1 v1}] Jan 29 15:04:45.625: INFO: apps/v1 matches apps/v1 Jan 29 15:04:45.625: INFO: Checking APIGroup: events.k8s.io Jan 29 15:04:45.626: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 29 15:04:45.626: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 29 15:04:45.626: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 29 15:04:45.626: INFO: Checking APIGroup: authentication.k8s.io Jan 29 15:04:45.628: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 29 15:04:45.628: INFO: Versions found [{authentication.k8s.io/v1 v1}] Jan 29 15:04:45.628: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 29 15:04:45.628: INFO: Checking APIGroup: authorization.k8s.io Jan 29 15:04:45.629: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 29 15:04:45.629: INFO: Versions found [{authorization.k8s.io/v1 v1}] Jan 29 15:04:45.629: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 29 15:04:45.629: INFO: Checking APIGroup: autoscaling Jan 29 15:04:45.630: INFO: PreferredVersion.GroupVersion: autoscaling/v2 Jan 29 15:04:45.630: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 29 15:04:45.630: INFO: autoscaling/v2 matches autoscaling/v2 Jan 29 15:04:45.630: INFO: Checking APIGroup: batch Jan 29 15:04:45.632: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 29 15:04:45.632: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 29 15:04:45.632: INFO: batch/v1 matches batch/v1 Jan 29 15:04:45.632: INFO: Checking APIGroup: certificates.k8s.io Jan 29 15:04:45.633: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 29 15:04:45.633: INFO: Versions found [{certificates.k8s.io/v1 v1}] Jan 29 15:04:45.633: INFO: 
certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 29 15:04:45.633: INFO: Checking APIGroup: networking.k8s.io Jan 29 15:04:45.634: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 29 15:04:45.634: INFO: Versions found [{networking.k8s.io/v1 v1}] Jan 29 15:04:45.634: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 29 15:04:45.634: INFO: Checking APIGroup: policy Jan 29 15:04:45.635: INFO: PreferredVersion.GroupVersion: policy/v1 Jan 29 15:04:45.635: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Jan 29 15:04:45.635: INFO: policy/v1 matches policy/v1 Jan 29 15:04:45.635: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 29 15:04:45.636: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 29 15:04:45.636: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] Jan 29 15:04:45.636: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 29 15:04:45.636: INFO: Checking APIGroup: storage.k8s.io Jan 29 15:04:45.637: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 29 15:04:45.637: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 29 15:04:45.637: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 29 15:04:45.637: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 29 15:04:45.638: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 29 15:04:45.638: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] Jan 29 15:04:45.638: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 29 15:04:45.638: INFO: Checking APIGroup: apiextensions.k8s.io Jan 29 15:04:45.639: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 29 15:04:45.639: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] Jan 29 15:04:45.639: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 29 15:04:45.639: INFO: Checking APIGroup: scheduling.k8s.io Jan 29 15:04:45.640: INFO: 
PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 29 15:04:45.640: INFO: Versions found [{scheduling.k8s.io/v1 v1}] Jan 29 15:04:45.640: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 29 15:04:45.640: INFO: Checking APIGroup: coordination.k8s.io Jan 29 15:04:45.641: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 29 15:04:45.641: INFO: Versions found [{coordination.k8s.io/v1 v1}] Jan 29 15:04:45.641: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 29 15:04:45.641: INFO: Checking APIGroup: node.k8s.io Jan 29 15:04:45.642: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 29 15:04:45.642: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 29 15:04:45.642: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 29 15:04:45.642: INFO: Checking APIGroup: discovery.k8s.io Jan 29 15:04:45.644: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Jan 29 15:04:45.644: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Jan 29 15:04:45.644: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Jan 29 15:04:45.644: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 29 15:04:45.645: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 Jan 29 15:04:45.645: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 29 15:04:45.645: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:45.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7592" for this suite. 
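The Discovery spec above asserts, for every served API group, that the group's advertised preferred version appears among the versions the group actually serves (e.g. `autoscaling/v2` among `v2, v1, v2beta1, v2beta2`). A minimal self-contained Go sketch of that check — the types and sample data here are illustrative, not the client-go discovery API:

```go
package main

import "fmt"

// APIGroup mirrors the shape reported by the discovery endpoint:
// each group advertises a preferred version plus its full version list.
type APIGroup struct {
	Name             string
	PreferredVersion string
	Versions         []string
}

// preferredVersionIsServed reports whether the group's preferred
// version is one of the versions the group serves.
func preferredVersionIsServed(g APIGroup) bool {
	for _, v := range g.Versions {
		if v == g.PreferredVersion {
			return true
		}
	}
	return false
}

func main() {
	// Sample groups taken from the log output above.
	groups := []APIGroup{
		{"autoscaling", "autoscaling/v2", []string{"autoscaling/v2", "autoscaling/v1", "autoscaling/v2beta1", "autoscaling/v2beta2"}},
		{"batch", "batch/v1", []string{"batch/v1", "batch/v1beta1"}},
	}
	for _, g := range groups {
		fmt.Printf("%s: preferred %s served=%v\n", g.Name, g.PreferredVersion, preferredVersionIsServed(g))
	}
}
```

The conformance test does the same membership check per group after fetching the real group list from the apiserver.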
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":25,"skipped":521,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:04:45.676: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 29 15:04:45.705: INFO: The status of Pod busybox-readonly-fsfa1548f2-1eb3-4e31-852b-02565cd01fd4 is Pending, waiting for it to be Running (with Ready = true) Jan 29 15:04:47.709: INFO: The status of Pod busybox-readonly-fsfa1548f2-1eb3-4e31-852b-02565cd01fd4 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:47.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6137" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":537,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:04:22.068: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating all guestbook components Jan 29 15:04:22.091: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 29 15:04:22.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:22.842: INFO: stderr: "" Jan 29 15:04:22.842: INFO: stdout: "service/agnhost-replica created\n" Jan 29 15:04:22.843: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary 
tier: backend Jan 29 15:04:22.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:23.155: INFO: stderr: "" Jan 29 15:04:23.155: INFO: stdout: "service/agnhost-primary created\n" Jan 29 15:04:23.155: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 29 15:04:23.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:23.375: INFO: stderr: "" Jan 29 15:04:23.375: INFO: stdout: "service/frontend created\n" Jan 29 15:04:23.375: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 29 15:04:23.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:23.638: INFO: stderr: "" Jan 29 15:04:23.638: INFO: stdout: "deployment.apps/frontend created\n" Jan 29 15:04:23.638: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 29 15:04:23.638: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:23.865: INFO: stderr: "" Jan 29 15:04:23.865: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 29 15:04:23.865: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 29 15:04:23.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 create -f -' Jan 29 15:04:24.134: INFO: stderr: "" Jan 29 15:04:24.134: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jan 29 15:04:24.134: INFO: Waiting for all frontend pods to be Running. Jan 29 15:04:29.187: INFO: Waiting for frontend to serve content. Jan 29 15:04:34.197: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 29 15:04:44.213: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 29 15:04:49.222: INFO: Trying to add a new entry to the guestbook. Jan 29 15:04:49.230: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 29 15:04:49.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.328: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jan 29 15:04:49.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.443: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 29 15:04:49.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.545: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 29 15:04:49.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.631: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 29 15:04:49.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.762: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.762: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 29 15:04:49.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4431 delete --grace-period=0 --force -f -' Jan 29 15:04:49.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 29 15:04:49.871: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:49.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4431" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":19,"skipped":271,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:04:47.757: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 29 15:04:47.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9934 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Jan 29 15:04:47.880: INFO: stderr: "" Jan 29 15:04:47.880: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 29 15:04:52.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9934 get pod e2e-test-httpd-pod -o json' Jan 29 15:04:53.030: INFO: stderr: "" Jan 29 15:04:53.030: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-29T15:04:47Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9934\",\n \"resourceVersion\": \"10119\",\n \"uid\": \"c06d5463-1e1a-4b01-b00d-9333bb312463\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-czz57\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": 
\"k8s-upgrade-and-conformance-pw1vby-worker-biy623\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-czz57\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-29T15:04:47Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-29T15:04:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-29T15:04:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-29T15:04:47Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://66e32cdfff04dba940090ee7b1ab98eac6530aaad695ca44d4241a3a86011b32\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n 
\"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-29T15:04:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.6.62\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.6.62\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-29T15:04:47Z\"\n }\n}\n" STEP: replace the image in the pod Jan 29 15:04:53.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9934 replace -f -' Jan 29 15:04:53.657: INFO: stderr: "" Jan 29 15:04:53.657: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 Jan 29 15:04:53.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9934 delete pods e2e-test-httpd-pod' Jan 29 15:04:55.713: INFO: stderr: "" Jan 29 15:04:55.713: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:55.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9934" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":27,"skipped":555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:04:49.903: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating the pod Jan 29 
15:04:49.949: INFO: The status of Pod labelsupdate6d6d4a18-a995-4d4e-bce3-5f208212b9a8 is Pending, waiting for it to be Running (with Ready = true) Jan 29 15:04:51.953: INFO: The status of Pod labelsupdate6d6d4a18-a995-4d4e-bce3-5f208212b9a8 is Running (Ready = true) Jan 29 15:04:52.475: INFO: Successfully updated pod "labelsupdate6d6d4a18-a995-4d4e-bce3-5f208212b9a8" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:56.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3130" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":276,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:04:55.844: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for all rs to be garbage collected �[1mSTEP�[0m: expected 0 rs, got 
1 rs �[1mSTEP�[0m: expected 0 pods, got 2 pods �[1mSTEP�[0m: Gathering metrics Jan 29 15:04:56.928: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true) Jan 29 15:04:56.996: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:04:56.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-146" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":28,"skipped":639,"failed":0}
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:57.128: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:57.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4495" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":29,"skipped":682,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:57.256: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:57.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9448" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":30,"skipped":723,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:56.532: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:04:56.555: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 29 15:04:57.590: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:04:58.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5655" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":21,"skipped":292,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:58.686: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 29 15:04:58.714: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-1791 4d5d44d5-5b86-4c21-b7c8-d0c9ba452b69 10443 0 2023-01-29 15:04:58 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-29 15:04:58 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vmsln,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmsln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:04:58.722: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:00.727: INFO: The status of Pod test-dns-nameservers is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 29 15:05:00.727: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1791 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:00.728: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:00.728: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:00.728: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-1791/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
STEP: Verifying customized DNS server is configured on pod...
Jan 29 15:05:00.832: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1791 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:00.832: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:00.833: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:00.833: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-1791/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:00.934: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:00.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1791" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":22,"skipped":337,"failed":0}
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:57.381: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jan 29 15:04:57.413: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jan 29 15:04:57.419: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 29 15:04:57.419: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jan 29 15:04:57.427: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 29 15:04:57.427: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jan 29 15:04:57.437: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Jan 29 15:04:57.437: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jan 29 15:05:04.490: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:04.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4932" for this suite.
•
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":31,"skipped":754,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:04.512: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption is created
Jan 29 15:05:04.543: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:06.547: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:07.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5390" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":32,"skipped":754,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:07.592: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replication controller my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478
Jan 29 15:05:07.622: INFO: Pod name my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478: Found 0 pods out of 1
Jan 29 15:05:12.626: INFO: Pod name my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478: Found 1 pods out of 1
Jan 29 15:05:12.626: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478" are running
Jan 29 15:05:12.629: INFO: Pod "my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478-f9qt6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 15:05:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 15:05:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 15:05:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 15:05:07 +0000 UTC Reason: Message:}])
Jan 29 15:05:12.629: INFO: Trying to dial the pod
Jan 29 15:05:17.640: INFO: Controller my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478: Got expected result from replica 1 [my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478-f9qt6]: "my-hostname-basic-ff5176e0-2aeb-45c9-a432-21b494da2478-f9qt6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:17.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3908" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":33,"skipped":762,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:00.963: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-4535
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 15:05:00.983: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 29 15:05:01.022: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:03.026: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:05.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:07.026: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:09.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:11.026: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:13.026: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 29 15:05:13.033: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 29 15:05:13.038: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 29 15:05:13.043: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 29 15:05:15.075: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 29 15:05:15.075: INFO: Going to poll 192.168.0.86 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 29 15:05:15.077: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.86 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4535 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:15.077: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:15.081: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:15.081: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4535/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.0.86+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:16.163: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 29 15:05:16.164: INFO: Going to poll 192.168.1.64 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 29 15:05:16.167: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4535 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:16.167: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:16.168: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:16.168: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4535/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.1.64+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:17.241: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 29 15:05:17.241: INFO: Going to poll 192.168.2.72 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 29 15:05:17.244: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.72 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4535 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:17.244: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:17.245: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:17.245: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4535/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.72+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:18.318: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 29 15:05:18.318: INFO: Going to poll 192.168.6.66 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 29 15:05:18.322: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.66 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4535 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:18.323: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:18.323: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:18.323: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4535/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.6.66+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:19.394: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:19.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "pod-network-test-4535" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":348,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:05:19.427: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a ReplicationController �[1mSTEP�[0m: waiting for RC to be added �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: patching ReplicationController �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: patching ReplicationController status �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: fetching ReplicationController status �[1mSTEP�[0m: patching ReplicationController scale �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for ReplicationController's scale to be the max amount �[1mSTEP�[0m: fetching ReplicationController; ensuring that it's patched �[1mSTEP�[0m: updating ReplicationController status �[1mSTEP�[0m: waiting for RC to be 
modified �[1mSTEP�[0m: listing all ReplicationControllers �[1mSTEP�[0m: checking that ReplicationController has expected values �[1mSTEP�[0m: deleting ReplicationControllers by collection �[1mSTEP�[0m: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:21.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-6736" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":24,"skipped":354,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:05:17.723: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be 
consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-ec59e327-cd92-4d20-80cb-7d3825ffc5e9 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 29 15:05:17.756: INFO: Waiting up to 5m0s for pod "pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83" in namespace "secrets-9826" to be "Succeeded or Failed" Jan 29 15:05:17.759: INFO: Pod "pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664783ms Jan 29 15:05:19.764: INFO: Pod "pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007210418s Jan 29 15:05:21.768: INFO: Pod "pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011343922s �[1mSTEP�[0m: Saw pod success Jan 29 15:05:21.768: INFO: Pod "pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83" satisfied condition "Succeeded or Failed" Jan 29 15:05:21.771: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:05:21.786: INFO: Waiting for pod pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83 to disappear Jan 29 15:05:21.789: INFO: Pod pod-secrets-a5ed741a-917f-4036-901c-0eaa250def83 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:21.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-9826" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":814,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:03:53.064: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-4513
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 29 15:03:53.107: INFO: Found 0 stateful pods, waiting for 3
Jan 29 15:04:03.112: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:04:03.112: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:04:03.112: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:04:03.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4513 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 29 15:04:03.300: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 29 15:04:03.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 29 15:04:03.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 29 15:04:13.339: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 29 15:04:23.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4513 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 29 15:04:23.542: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 29 15:04:23.542: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 29 15:04:23.542: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
STEP: Rolling back to a previous revision
Jan 29 15:04:43.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4513 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 29 15:04:43.782: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 29 15:04:43.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 29 15:04:43.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 29 15:04:53.816: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 29 15:05:03.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4513 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 29 15:05:04.001: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 29 15:05:04.001: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 29 15:05:04.001: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 29 15:05:14.029: INFO: Deleting all statefulset in ns statefulset-4513
Jan 29 15:05:14.032: INFO: Scaling statefulset ss2 to 0
Jan 29 15:05:24.049: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 15:05:24.052: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:24.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4513" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":11,"skipped":175,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:21.624: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 29 15:05:21.651: INFO: Waiting up to 5m0s for pod "pod-866953b4-02d0-40d9-9a5c-56a7915a851f" in namespace "emptydir-5331" to be "Succeeded or Failed"
Jan 29 15:05:21.653: INFO: Pod "pod-866953b4-02d0-40d9-9a5c-56a7915a851f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602622ms
Jan 29 15:05:23.657: INFO: Pod "pod-866953b4-02d0-40d9-9a5c-56a7915a851f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006640109s
Jan 29 15:05:25.662: INFO: Pod "pod-866953b4-02d0-40d9-9a5c-56a7915a851f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011343002s
STEP: Saw pod success
Jan 29 15:05:25.662: INFO: Pod "pod-866953b4-02d0-40d9-9a5c-56a7915a851f" satisfied condition "Succeeded or Failed"
Jan 29 15:05:25.665: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-866953b4-02d0-40d9-9a5c-56a7915a851f container test-container: <nil>
STEP: delete the pod
Jan 29 15:05:25.682: INFO: Waiting for pod pod-866953b4-02d0-40d9-9a5c-56a7915a851f to disappear
Jan 29 15:05:25.685: INFO: Pod pod-866953b4-02d0-40d9-9a5c-56a7915a851f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:25.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5331" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":420,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:21.822: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-b2c5dce1-1107-4fd8-b5c7-7b024538c638
STEP: Creating a pod to test consume configMaps
Jan 29 15:05:21.863: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2" in namespace "projected-3950" to be "Succeeded or Failed"
Jan 29 15:05:21.865: INFO: Pod "pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863387ms
Jan 29 15:05:23.871: INFO: Pod "pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008220662s
Jan 29 15:05:25.876: INFO: Pod "pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013270656s
STEP: Saw pod success
Jan 29 15:05:25.876: INFO: Pod "pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2" satisfied condition "Succeeded or Failed"
Jan 29 15:05:25.883: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:05:25.897: INFO: Waiting for pod pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2 to disappear
Jan 29 15:05:25.900: INFO: Pod pod-projected-configmaps-8d728fa0-5753-427a-89a7-953524529ce2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:25.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3950" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":832,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:25.915: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:05:25.962: INFO: The status of Pod pod-secrets-2589accb-f04b-4acd-9041-37d7b8b8e471 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:27.969: INFO: The status of Pod pod-secrets-2589accb-f04b-4acd-9041-37d7b8b8e471 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:29.965: INFO: The status of Pod pod-secrets-2589accb-f04b-4acd-9041-37d7b8b8e471 is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:29.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8383" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":36,"skipped":833,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:25.704: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-5fdeb5cf-d077-475e-8c88-6f3a9d8610a1
STEP: Creating a pod to test consume configMaps
Jan 29 15:05:25.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8" in namespace "configmap-7455" to be "Succeeded or Failed"
Jan 29 15:05:25.763: INFO: Pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406734ms
Jan 29 15:05:27.768: INFO: Pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015570001s
Jan 29 15:05:29.773: INFO: Pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019787693s
Jan 29 15:05:31.777: INFO: Pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024240988s
STEP: Saw pod success
Jan 29 15:05:31.777: INFO: Pod "pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8" satisfied condition "Succeeded or Failed"
Jan 29 15:05:31.780: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:05:31.793: INFO: Waiting for pod pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8 to disappear
Jan 29 15:05:31.796: INFO: Pod pod-configmaps-b7409081-e954-4f69-bd19-d167042a5dd8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:31.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7455" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":427,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:04:34.642: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-3370
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 29 15:04:34.683: INFO: Found 0 stateful pods, waiting for 3
Jan 29 15:04:44.690: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:04:44.690: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:04:44.690: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 29 15:04:44.720: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 29 15:04:54.756: INFO: Updating stateful set ss2
Jan 29 15:04:54.761: INFO: Waiting for Pod statefulset-3370/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
STEP: Restoring Pods to the correct revision when they are deleted
Jan 29 15:05:04.804: INFO: Found 1 stateful pods, waiting for 3
Jan 29 15:05:14.809: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:05:14.809: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:05:14.809: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 29 15:05:14.835: INFO: Updating stateful set ss2
Jan 29 15:05:14.846: INFO: Waiting for Pod statefulset-3370/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Jan 29 15:05:24.881: INFO: Updating stateful set ss2
Jan 29 15:05:24.904: INFO: Waiting for StatefulSet statefulset-3370/ss2 to complete update
Jan 29 15:05:24.904: INFO: Waiting for Pod statefulset-3370/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 29 15:05:34.912: INFO: Deleting all statefulset in ns statefulset-3370
Jan 29 15:05:34.914: INFO: Scaling statefulset ss2 to 0
Jan 29 15:05:44.929: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 15:05:44.931: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:05:44.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3370" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":10,"skipped":212,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:30.091: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
Jan 29 15:05:30.099: INFO: Namespace name "pod-network-test-4535" was already taken, generate a new name and retry
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-6029
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 15:05:32.118: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 29 15:05:32.174: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:34.178: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:36.179: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:38.178: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:40.179: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:42.180: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 29 15:05:44.178: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 29 15:05:44.184: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 29 15:05:44.190: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 29 15:05:44.196: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 29 15:05:46.223: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 29 15:05:46.223: INFO: Breadth first check of 192.168.0.94 on host 172.18.0.4...
Jan 29 15:05:46.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.83:9080/dial?request=hostname&protocol=udp&host=192.168.0.94&port=8081&tries=1'] Namespace:pod-network-test-6029 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:46.229: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:46.230: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:46.230: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6029/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.83%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.0.94%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:46.320: INFO: Waiting for responses: map[]
Jan 29 15:05:46.320: INFO: reached 192.168.0.94 after 0/1 tries
Jan 29 15:05:46.320: INFO: Breadth first check of 192.168.1.68 on host 172.18.0.6...
Jan 29 15:05:46.325: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.83:9080/dial?request=hostname&protocol=udp&host=192.168.1.68&port=8081&tries=1'] Namespace:pod-network-test-6029 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:46.325: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:46.325: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:46.326: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6029/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.83%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.1.68%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:46.411: INFO: Waiting for responses: map[]
Jan 29 15:05:46.411: INFO: reached 192.168.1.68 after 0/1 tries
Jan 29 15:05:46.411: INFO: Breadth first check of 192.168.2.81 on host 172.18.0.7...
Jan 29 15:05:46.414: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.83:9080/dial?request=hostname&protocol=udp&host=192.168.2.81&port=8081&tries=1'] Namespace:pod-network-test-6029 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:05:46.414: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:05:46.414: INFO: ExecWithOptions: Clientset creation
Jan 29 15:05:46.415: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6029/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.83%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.2.81%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:05:46.491: INFO: Waiting for responses: map[]
Jan 29 15:05:46.491: INFO: reached 192.168.2.81 after 0/1 tries
Jan 29 15:05:46.491: INFO: Breadth first check of 192.168.6.69 on host 172.18.0.5...
Jan 29 15:05:46.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.83:9080/dial?request=hostname&protocol=udp&host=192.168.6.69&port=8081&tries=1'] Namespace:pod-network-test-6029 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 29 15:05:46.497: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 29 15:05:46.498: INFO: ExecWithOptions: Clientset creation Jan 29 15:05:46.498: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6029/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.83%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.6.69%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 29 15:05:46.603: INFO: Waiting for responses: map[] Jan 29 15:05:46.603: INFO: reached 192.168.6.69 after 0/1 tries Jan 29 15:05:46.603: INFO: Going to retry 0 out of 4 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6029" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:44.965: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 29 15:05:44.986: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 29 15:05:47.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3423 --namespace=crd-publish-openapi-3423 create -f -' Jan 29 15:05:48.259: INFO: stderr: "" Jan 29 15:05:48.259: INFO: stdout: "e2e-test-crd-publish-openapi-8192-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 29 15:05:48.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3423 --namespace=crd-publish-openapi-3423 delete e2e-test-crd-publish-openapi-8192-crds test-cr' Jan 29 15:05:48.335: INFO: stderr: "" Jan 29 15:05:48.335: 
INFO: stdout: "e2e-test-crd-publish-openapi-8192-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 29 15:05:48.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3423 --namespace=crd-publish-openapi-3423 apply -f -' Jan 29 15:05:48.540: INFO: stderr: "" Jan 29 15:05:48.540: INFO: stdout: "e2e-test-crd-publish-openapi-8192-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 29 15:05:48.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3423 --namespace=crd-publish-openapi-3423 delete e2e-test-crd-publish-openapi-8192-crds test-cr' Jan 29 15:05:48.618: INFO: stderr: "" Jan 29 15:05:48.618: INFO: stdout: "e2e-test-crd-publish-openapi-8192-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 29 15:05:48.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3423 explain e2e-test-crd-publish-openapi-8192-crds' Jan 29 15:05:48.817: INFO: stderr: "" Jan 29 15:05:48.817: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8192-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:51.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3423" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":223,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:51.077: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Jan 29 15:05:51.117: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 29 15:05:51.121: INFO: starting watch STEP: patching STEP: updating Jan 29 15:05:51.136: INFO: waiting for watch events with expected annotations Jan 29 15:05:51.136: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:51.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5147" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":12,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:31.950: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-9841 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating statefulset ss in namespace statefulset-9841 Jan 29 15:05:31.984: INFO: Found 0 stateful pods, waiting for 1 Jan 29 15:05:41.989: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 29 15:05:42.016: INFO: Deleting all statefulset in ns statefulset-9841 Jan 29 15:05:42.020: INFO: Scaling statefulset ss to 0 Jan 29 15:05:52.036: INFO: Waiting for statefulset status.replicas updated to 0 Jan 29 15:05:52.040: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:52.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9841" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":27,"skipped":527,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:46.650: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 STEP: creating an pod Jan 29 15:05:46.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 
--restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 29 15:05:46.749: INFO: stderr: "" Jan 29 15:05:46.749: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for log generator to start. Jan 29 15:05:46.749: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 29 15:05:46.749: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4536" to be "running and ready, or succeeded" Jan 29 15:05:46.752: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25397ms Jan 29 15:05:48.757: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.007958631s Jan 29 15:05:48.757: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 29 15:05:48.757: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 29 15:05:48.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator logs-generator' Jan 29 15:05:48.895: INFO: stderr: "" Jan 29 15:05:48.895: INFO: stdout: "I0129 15:05:47.419187 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/k9fg 511\nI0129 15:05:47.619425 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lgrw 290\nI0129 15:05:47.819782 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/hsh 232\nI0129 15:05:48.020246 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/c8r 516\nI0129 15:05:48.219630 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bhq 499\nI0129 15:05:48.420044 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/ccc 342\nI0129 15:05:48.619315 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/qq6 487\nI0129 15:05:48.819717 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/7n8 435\n" STEP: limiting log lines Jan 29 15:05:48.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator logs-generator --tail=1' Jan 29 15:05:48.978: INFO: stderr: "" Jan 29 15:05:48.978: INFO: stdout: "I0129 15:05:48.819717 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/7n8 435\n" Jan 29 15:05:48.978: INFO: got output "I0129 15:05:48.819717 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/7n8 435\n" STEP: limiting log bytes Jan 29 15:05:48.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator logs-generator --limit-bytes=1' Jan 29 15:05:49.058: INFO: stderr: "" Jan 29 15:05:49.058: INFO: stdout: "I" Jan 29 15:05:49.058: INFO: got output "I" STEP: exposing timestamps Jan 29 15:05:49.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator 
logs-generator --tail=1 --timestamps' Jan 29 15:05:49.144: INFO: stderr: "" Jan 29 15:05:49.144: INFO: stdout: "2023-01-29T15:05:49.020345611Z I0129 15:05:49.020138 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/2w4 216\n" Jan 29 15:05:49.144: INFO: got output "2023-01-29T15:05:49.020345611Z I0129 15:05:49.020138 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/2w4 216\n" STEP: restricting to a time range Jan 29 15:05:51.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator logs-generator --since=1s' Jan 29 15:05:51.786: INFO: stderr: "" Jan 29 15:05:51.786: INFO: stdout: "I0129 15:05:50.820235 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/6j9v 328\nI0129 15:05:51.019653 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/tlq 377\nI0129 15:05:51.220110 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/8pv 454\nI0129 15:05:51.419319 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/p24d 293\nI0129 15:05:51.619699 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/zs8 599\n" Jan 29 15:05:51.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 logs logs-generator logs-generator --since=24h' Jan 29 15:05:51.940: INFO: stderr: "" Jan 29 15:05:51.940: INFO: stdout: "I0129 15:05:47.419187 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/k9fg 511\nI0129 15:05:47.619425 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/lgrw 290\nI0129 15:05:47.819782 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/hsh 232\nI0129 15:05:48.020246 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/c8r 516\nI0129 15:05:48.219630 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bhq 499\nI0129 15:05:48.420044 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/ccc 342\nI0129 15:05:48.619315 1 logs_generator.go:76] 6 PUT 
/api/v1/namespaces/kube-system/pods/qq6 487\nI0129 15:05:48.819717 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/7n8 435\nI0129 15:05:49.020138 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/2w4 216\nI0129 15:05:49.219466 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/hl9k 337\nI0129 15:05:49.419974 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/cr9 554\nI0129 15:05:49.619333 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/pn9 589\nI0129 15:05:49.819823 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/clw 523\nI0129 15:05:50.019335 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/6zdr 502\nI0129 15:05:50.219850 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/6qx 430\nI0129 15:05:50.420316 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/9t8t 370\nI0129 15:05:50.619774 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/q4cq 413\nI0129 15:05:50.820235 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/6j9v 328\nI0129 15:05:51.019653 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/tlq 377\nI0129 15:05:51.220110 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/8pv 454\nI0129 15:05:51.419319 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/p24d 293\nI0129 15:05:51.619699 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/zs8 599\nI0129 15:05:51.821491 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/skw 286\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Jan 29 15:05:51.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4536 delete pod logs-generator' Jan 29 15:05:53.197: INFO: stderr: "" Jan 29 15:05:53.197: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:53.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4536" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":38,"skipped":921,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:52.090: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating pod Jan 29 15:05:52.130: INFO: The status of Pod pod-hostip-ead29e64-53a8-48df-ad6f-1698a48d703f is Pending, waiting for it to be Running (with Ready = true) Jan 29 15:05:54.138: INFO: The status of Pod pod-hostip-ead29e64-53a8-48df-ad6f-1698a48d703f is Running (Ready = true) Jan 29 15:05:54.149: INFO: Pod pod-hostip-ead29e64-53a8-48df-ad6f-1698a48d703f has hostIP: 172.18.0.4 [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:54.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3694" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:51.210: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 29 15:05:51.236: INFO: Waiting up to 5m0s for pod "pod-4691e55c-6d60-496f-ad10-67468a6c56d2" in namespace "emptydir-8582" to be "Succeeded or Failed" Jan 29 15:05:51.239: INFO: Pod "pod-4691e55c-6d60-496f-ad10-67468a6c56d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.58271ms Jan 29 15:05:53.244: INFO: Pod "pod-4691e55c-6d60-496f-ad10-67468a6c56d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007392353s Jan 29 15:05:55.251: INFO: Pod "pod-4691e55c-6d60-496f-ad10-67468a6c56d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014263281s STEP: Saw pod success Jan 29 15:05:55.251: INFO: Pod "pod-4691e55c-6d60-496f-ad10-67468a6c56d2" satisfied condition "Succeeded or Failed" Jan 29 15:05:55.253: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-4691e55c-6d60-496f-ad10-67468a6c56d2 container test-container: <nil> STEP: delete the pod Jan 29 15:05:55.272: INFO: Waiting for pod pod-4691e55c-6d60-496f-ad10-67468a6c56d2 to disappear Jan 29 15:05:55.274: INFO: Pod pod-4691e55c-6d60-496f-ad10-67468a6c56d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:55.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8582" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:54.264: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-aeb72d52-20f2-4081-970e-391df138457d STEP: Creating a pod to test consume configMaps Jan 29 15:05:54.305: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82" in namespace "configmap-4479" to be "Succeeded or Failed" Jan 29 15:05:54.311: INFO: Pod "pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82": Phase="Pending", Reason="", readiness=false. Elapsed: 5.9608ms Jan 29 15:05:56.315: INFO: Pod "pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009406323s Jan 29 15:05:58.319: INFO: Pod "pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013792192s STEP: Saw pod success Jan 29 15:05:58.319: INFO: Pod "pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82" satisfied condition "Succeeded or Failed" Jan 29 15:05:58.323: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82 container agnhost-container: <nil> STEP: delete the pod Jan 29 15:05:58.337: INFO: Waiting for pod pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82 to disappear Jan 29 15:05:58.340: INFO: Pod pod-configmaps-7f46407d-5878-49eb-98da-0c3a04714a82 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:58.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4479" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":575,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:55.332: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 29 15:05:55.374: INFO: Waiting up to 5m0s for pod "pod-440b88a1-26f5-419e-a5c2-a150c70b9050" in namespace "emptydir-5662" to be "Succeeded or Failed" Jan 29 15:05:55.377: INFO: Pod "pod-440b88a1-26f5-419e-a5c2-a150c70b9050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627111ms Jan 29 15:05:57.381: INFO: Pod "pod-440b88a1-26f5-419e-a5c2-a150c70b9050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006731067s Jan 29 15:05:59.385: INFO: Pod "pod-440b88a1-26f5-419e-a5c2-a150c70b9050": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011212432s STEP: Saw pod success Jan 29 15:05:59.385: INFO: Pod "pod-440b88a1-26f5-419e-a5c2-a150c70b9050" satisfied condition "Succeeded or Failed" Jan 29 15:05:59.389: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-440b88a1-26f5-419e-a5c2-a150c70b9050 container test-container: <nil> STEP: delete the pod Jan 29 15:05:59.403: INFO: Waiting for pod pod-440b88a1-26f5-419e-a5c2-a150c70b9050 to disappear Jan 29 15:05:59.408: INFO: Pod pod-440b88a1-26f5-419e-a5c2-a150c70b9050 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:59.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5662" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:05:53.245: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 29 15:05:53.680: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 29 15:05:56.700: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 29 15:05:56.703: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:05:59.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-402" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":39,"skipped":942,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:05:59.918: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename certificates �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/certificates.k8s.io �[1mSTEP�[0m: getting /apis/certificates.k8s.io/v1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Jan 29 15:06:01.135: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Jan 29 15:06:01.150: INFO: waiting for watch events with expected annotations Jan 29 15:06:01.151: INFO: saw patched and updated annotations �[1mSTEP�[0m: getting /approval �[1mSTEP�[0m: patching /approval �[1mSTEP�[0m: updating /approval �[1mSTEP�[0m: getting /status �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection 
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:01.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "certificates-5623" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:05:59.451: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4404.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4404.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4404.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4404.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: creating a pod to probe /etc/hosts �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 29 15:06:07.530: INFO: DNS probes using 
dns-4404/dns-test-5410b868-e580-44b0-8f17-ba7f6a03b39b succeeded �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:07.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-4404" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":292,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:07.584: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: validating cluster-info Jan 29 15:06:07.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4213 cluster-info' Jan 29 15:06:07.686: INFO: stderr: "" Jan 29 15:06:07.686: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster 
problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:07.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-4213" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":40,"skipped":953,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:01.237: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service in namespace services-6896 �[1mSTEP�[0m: creating service affinity-clusterip in namespace services-6896 �[1mSTEP�[0m: creating replication controller affinity-clusterip in namespace services-6896 I0129 15:06:01.334252 17 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-6896, replica count: 3 I0129 15:06:04.385125 17 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 29 15:06:04.392: INFO: Creating new exec pod Jan 29 
15:06:07.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6896 exec execpod-affinityvpd8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 29 15:06:07.573: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 29 15:06:07.573: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:06:07.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6896 exec execpod-affinityvpd8b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.136.133.103 80' Jan 29 15:06:07.751: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.136.133.103 80\nConnection to 10.136.133.103 80 port [tcp/http] succeeded!\n" Jan 29 15:06:07.751: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 29 15:06:07.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6896 exec execpod-affinityvpd8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.133.103:80/ ; done' Jan 29 15:06:07.997: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.133.103:80/\n" Jan 29 15:06:07.997: INFO: stdout: "\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d\naffinity-clusterip-fht9d" Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: 
affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Received response from host: affinity-clusterip-fht9d Jan 29 15:06:07.997: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip in namespace services-6896, will wait for the garbage collector to delete the pods Jan 29 15:06:08.068: INFO: Deleting ReplicationController affinity-clusterip took: 5.925235ms Jan 29 15:06:08.168: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.832498ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-6896" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":41,"skipped":953,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:10.114: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on default medium should have the 
correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir volume type on node default medium Jan 29 15:06:10.142: INFO: Waiting up to 5m0s for pod "pod-00847c84-fa16-4341-8f18-4b0c64ce327a" in namespace "emptydir-21" to be "Succeeded or Failed" Jan 29 15:06:10.145: INFO: Pod "pod-00847c84-fa16-4341-8f18-4b0c64ce327a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033761ms Jan 29 15:06:12.151: INFO: Pod "pod-00847c84-fa16-4341-8f18-4b0c64ce327a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008990217s Jan 29 15:06:14.155: INFO: Pod "pod-00847c84-fa16-4341-8f18-4b0c64ce327a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012681239s �[1mSTEP�[0m: Saw pod success Jan 29 15:06:14.155: INFO: Pod "pod-00847c84-fa16-4341-8f18-4b0c64ce327a" satisfied condition "Succeeded or Failed" Jan 29 15:06:14.158: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-00847c84-fa16-4341-8f18-4b0c64ce327a container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:06:14.177: INFO: Waiting for pod pod-00847c84-fa16-4341-8f18-4b0c64ce327a to disappear Jan 29 15:06:14.180: INFO: Pod pod-00847c84-fa16-4341-8f18-4b0c64ce327a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:14.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-21" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":970,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":16,"skipped":307,"failed":0} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:07.697: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: Orphaning one of the Job's Pods Jan 29 15:06:10.244: INFO: Successfully updated pod "adopt-release-9x4bm" �[1mSTEP�[0m: Checking that the Job readopts the Pod Jan 29 15:06:10.244: INFO: Waiting up to 15m0s for pod "adopt-release-9x4bm" in namespace "job-3739" to be "adopted" Jan 29 15:06:10.248: INFO: Pod "adopt-release-9x4bm": Phase="Running", Reason="", readiness=true. Elapsed: 3.991295ms Jan 29 15:06:12.252: INFO: Pod "adopt-release-9x4bm": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008529491s Jan 29 15:06:12.252: INFO: Pod "adopt-release-9x4bm" satisfied condition "adopted" �[1mSTEP�[0m: Removing the labels from the Job's Pod Jan 29 15:06:12.766: INFO: Successfully updated pod "adopt-release-9x4bm" �[1mSTEP�[0m: Checking that the Job releases the Pod Jan 29 15:06:12.766: INFO: Waiting up to 15m0s for pod "adopt-release-9x4bm" in namespace "job-3739" to be "released" Jan 29 15:06:12.768: INFO: Pod "adopt-release-9x4bm": Phase="Running", Reason="", readiness=true. Elapsed: 2.433284ms Jan 29 15:06:14.773: INFO: Pod "adopt-release-9x4bm": Phase="Running", Reason="", readiness=true. Elapsed: 2.007180484s Jan 29 15:06:14.773: INFO: Pod "adopt-release-9x4bm" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:14.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-3739" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":17,"skipped":307,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:14.202: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Jan 29 15:06:14.233: INFO: The status of Pod annotationupdate0a55f5b4-4d14-44cf-ae05-e44996a8d0f5 is Pending, waiting for it to be Running (with Ready = true) Jan 29 15:06:16.238: INFO: The status of Pod annotationupdate0a55f5b4-4d14-44cf-ae05-e44996a8d0f5 is Running (Ready = true) Jan 29 15:06:16.758: INFO: Successfully updated pod "annotationupdate0a55f5b4-4d14-44cf-ae05-e44996a8d0f5" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:20.780: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-5394" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":978,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:14.823: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-500 �[1mSTEP�[0m: changing the ExternalName service to type=NodePort �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-500 I0129 15:06:14.878210 15 runners.go:193] Created replication controller with name: externalname-service, namespace: services-500, replica count: 2 I0129 15:06:17.929408 15 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 29 15:06:17.929: INFO: Creating new exec pod Jan 29 15:06:20.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw 
-- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jan 29 15:06:21.107: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 29 15:06:21.107: INFO: stdout: "externalname-service-4fz29" Jan 29 15:06:21.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.175.73 80' Jan 29 15:06:21.268: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.175.73 80\nConnection to 10.131.175.73 80 port [tcp/http] succeeded!\n" Jan 29 15:06:21.268: INFO: stdout: "externalname-service-4fz29" Jan 29 15:06:21.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31097' Jan 29 15:06:21.412: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31097\nConnection to 172.18.0.4 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:21.412: INFO: stdout: "" Jan 29 15:06:22.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31097' Jan 29 15:06:24.559: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31097\nConnection to 172.18.0.4 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:24.559: INFO: stdout: "" Jan 29 15:06:25.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31097' Jan 29 15:06:27.559: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31097\nConnection to 172.18.0.4 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:27.559: INFO: stdout: "" Jan 29 15:06:28.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 172.18.0.4 31097' Jan 29 15:06:30.560: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31097\nConnection to 172.18.0.4 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:30.560: INFO: stdout: "" Jan 29 15:06:31.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31097' Jan 29 15:06:31.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31097\nConnection to 172.18.0.4 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:31.549: INFO: stdout: "externalname-service-7924m" Jan 29 15:06:31.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-500 exec execpodzp4hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31097' Jan 29 15:06:31.702: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31097\nConnection to 172.18.0.7 31097 port [tcp/*] succeeded!\n" Jan 29 15:06:31.702: INFO: stdout: "externalname-service-7924m" Jan 29 15:06:31.702: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:31.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-500" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":18,"skipped":334,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:05:58.371: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: delete a job �[1mSTEP�[0m: deleting Job.batch foo in namespace job-3772, will wait for the garbage collector to delete the pods Jan 29 15:06:00.462: INFO: Deleting Job.batch foo took: 8.016284ms Jan 29 15:06:00.563: INFO: Terminating Job.batch foo pods took: 100.966554ms �[1mSTEP�[0m: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:33.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-3772" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":30,"skipped":589,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:31.754: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Failed �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 29 15:06:35.831: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- �[1mSTEP�[0m: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:06:35.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-629" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":335,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:35.912: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:06:35.935: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 29 15:06:35.949: INFO: The status of Pod pod-logs-websocket-c4171d6d-657e-4a32-8429-042cbc756cfa is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:06:37.957: INFO: The status of Pod pod-logs-websocket-c4171d6d-657e-4a32-8429-042cbc756cfa is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:06:37.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-646" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":370,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:38.024: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:06:38.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80" in namespace "downward-api-1913" to be "Succeeded or Failed"
Jan 29 15:06:38.103: INFO: Pod "downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409152ms
Jan 29 15:06:40.107: INFO: Pod "downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008568703s
Jan 29 15:06:42.111: INFO: Pod "downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01238566s
STEP: Saw pod success
Jan 29 15:06:42.111: INFO: Pod "downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80" satisfied condition "Succeeded or Failed"
Jan 29 15:06:42.114: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527 pod downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80 container client-container: <nil>
STEP: delete the pod
Jan 29 15:06:42.137: INFO: Waiting for pod downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80 to disappear
Jan 29 15:06:42.140: INFO: Pod downwardapi-volume-1a421e3f-4800-4213-9164-333c8b13cc80 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:06:42.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1913" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":386,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:42.229: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 15:06:45.277: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:06:45.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3779" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":438,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:45.300: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 29 15:06:45.333: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 29 15:06:50.339: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:06:51.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9918" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":23,"skipped":438,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:51.370: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:06:51.389: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 29 15:06:53.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6284 --namespace=crd-publish-openapi-6284 create -f -'
Jan 29 15:06:54.480: INFO: stderr: ""
Jan 29 15:06:54.480: INFO: stdout: "e2e-test-crd-publish-openapi-6031-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 29 15:06:54.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6284 --namespace=crd-publish-openapi-6284 delete e2e-test-crd-publish-openapi-6031-crds test-cr'
Jan 29 15:06:54.556: INFO: stderr: ""
Jan 29 15:06:54.556: INFO: stdout: "e2e-test-crd-publish-openapi-6031-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 29 15:06:54.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6284 --namespace=crd-publish-openapi-6284 apply -f -'
Jan 29 15:06:54.768: INFO: stderr: ""
Jan 29 15:06:54.768: INFO: stdout: "e2e-test-crd-publish-openapi-6031-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 29 15:06:54.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6284 --namespace=crd-publish-openapi-6284 delete e2e-test-crd-publish-openapi-6031-crds test-cr'
Jan 29 15:06:54.843: INFO: stderr: ""
Jan 29 15:06:54.843: INFO: stdout: "e2e-test-crd-publish-openapi-6031-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 29 15:06:54.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6284 explain e2e-test-crd-publish-openapi-6031-crds'
Jan 29 15:06:55.028: INFO: stderr: ""
Jan 29 15:06:55.028: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6031-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:06:57.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6284" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":24,"skipped":442,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:57.325: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 29 15:06:57.352: INFO: Waiting up to 5m0s for pod "downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6" in namespace "downward-api-8959" to be "Succeeded or Failed"
Jan 29 15:06:57.356: INFO: Pod "downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.363304ms
Jan 29 15:06:59.359: INFO: Pod "downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006934782s
Jan 29 15:07:01.364: INFO: Pod "downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01204734s
STEP: Saw pod success
Jan 29 15:07:01.364: INFO: Pod "downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6" satisfied condition "Succeeded or Failed"
Jan 29 15:07:01.367: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6 container dapi-container: <nil>
STEP: delete the pod
Jan 29 15:07:01.384: INFO: Waiting for pod downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6 to disappear
Jan 29 15:07:01.387: INFO: Pod downward-api-7a9bbdd3-8574-40b7-a01f-ef71e11918f6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:07:01.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8959" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":446,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:06:33.191: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 15:06:41.235: INFO: DNS probes using dns-test-076e74ba-f837-48f5-aa63-ef35814a8600 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 15:06:43.268: INFO: File wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:43.272: INFO: File jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:43.272: INFO: Lookups using dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 failed for: [wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local]
Jan 29 15:06:48.279: INFO: File wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:48.282: INFO: File jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:48.282: INFO: Lookups using dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 failed for: [wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local]
Jan 29 15:06:53.277: INFO: File wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:53.281: INFO: File jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:53.281: INFO: Lookups using dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 failed for: [wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local]
Jan 29 15:06:58.278: INFO: File wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:58.282: INFO: File jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:06:58.282: INFO: Lookups using dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 failed for: [wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local]
Jan 29 15:07:03.279: INFO: File wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:07:03.283: INFO: File jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local from pod dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 29 15:07:03.283: INFO: Lookups using dns-6791/dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 failed for: [wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local]
Jan 29 15:07:08.279: INFO: DNS probes using dns-test-9860ee1a-c5ea-4e02-a6e2-9328c18beff5 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6791.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6791.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 15:07:10.365: INFO: DNS probes using dns-test-72c75862-97f2-438f-b72d-11d0e268f341 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:07:10.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6791" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":31,"skipped":600,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:07:10.469: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cac184d6-7c76-426a-8216-e5ed198ab731
STEP: Creating the pod
Jan 29 15:07:10.528: INFO: The status of Pod pod-projected-configmaps-d778be34-2b7a-43c4-a9b5-15b5bd9ac664 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:07:12.533: INFO: The status of Pod pod-projected-configmaps-d778be34-2b7a-43c4-a9b5-15b5bd9ac664 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-cac184d6-7c76-426a-8216-e5ed198ab731
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:07:14.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4603" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":632,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:07:14.600: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should validate Deployment Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
Jan 29 15:07:14.629: INFO: Creating simple deployment test-deployment-vdcpz
Jan 29 15:07:14.642: INFO: deployment "test-deployment-vdcpz" doesn't have the required revision set
STEP: Getting /status
Jan 29 15:07:16.659: INFO: Deployment test-deployment-vdcpz has Conditions: [{Available True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vdcpz-764bc7c4b7" has successfully progressed.}]
STEP: updating Deployment Status
Jan 29 15:07:16.668: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 7, 16, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 7, 16, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 29, 15, 7, 16, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 29, 15, 7, 14, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-vdcpz-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Deployment status to be updated
Jan 29 15:07:16.671: INFO: Observed &Deployment event: ADDED
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vdcpz-764bc7c4b7"}
Jan 29 15:07:16.671: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vdcpz-764bc7c4b7"}
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 29 15:07:16.671: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-vdcpz-764bc7c4b7" is progressing.}
Jan 29 15:07:16.671: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 29 15:07:16.671: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vdcpz-764bc7c4b7" has successfully progressed.}
Jan 29 15:07:16.672: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.672: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 29 15:07:16.672: INFO: Observed Deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vdcpz-764bc7c4b7" has successfully progressed.}
Jan 29 15:07:16.672: INFO: Found Deployment test-deployment-vdcpz in namespace deployment-3952 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Jan 29 15:07:16.672: INFO: Deployment test-deployment-vdcpz has an updated status
STEP: patching the Statefulset Status
Jan 29 15:07:16.672: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Jan 29 15:07:16.679: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Deployment status to be patched
Jan 29 15:07:16.681: INFO: Observed &Deployment event: ADDED
Jan 29 15:07:16.682: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vdcpz-764bc7c4b7"}
Jan 29 15:07:16.682: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.682: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-vdcpz-764bc7c4b7"}
Jan 29 15:07:16.682: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 29 15:07:16.682: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.682: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 29 15:07:16.682: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:14 +0000 UTC 2023-01-29 15:07:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-vdcpz-764bc7c4b7" is progressing.}
Jan 29 15:07:16.683: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.683: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 29 15:07:16.683: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vdcpz-764bc7c4b7" has successfully progressed.}
Jan 29 15:07:16.683: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.683: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 29 15:07:16.683: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-29 15:07:16 +0000 UTC 2023-01-29 15:07:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-vdcpz-764bc7c4b7" has successfully progressed.}
Jan 29 15:07:16.683: INFO: Observed deployment test-deployment-vdcpz in namespace deployment-3952 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Jan 29 15:07:16.683: INFO: Observed &Deployment event: MODIFIED
Jan 29 15:07:16.683: INFO: Found deployment test-deployment-vdcpz in namespace deployment-3952 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC }
Jan 29 15:07:16.683: INFO: Deployment test-deployment-vdcpz has a patched status
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 29 15:07:16.687: INFO: Deployment "test-deployment-vdcpz": &Deployment{ObjectMeta:{test-deployment-vdcpz deployment-3952 9a98e9c3-be45-46b9-b0ae-861557f744a7 12938 1 2023-01-29 15:07:14 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-29 15:07:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-01-29 15:07:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-01-29 15:07:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] []
Always 0xc0027dc398 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-vdcpz-764bc7c4b7",LastUpdateTime:2023-01-29 15:07:16 +0000 UTC,LastTransitionTime:2023-01-29 15:07:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 29 15:07:16.690: INFO: New ReplicaSet "test-deployment-vdcpz-764bc7c4b7" of Deployment "test-deployment-vdcpz": &ReplicaSet{ObjectMeta:{test-deployment-vdcpz-764bc7c4b7 deployment-3952 5ae09209-bced-4674-bf21-44fea0384bb2 12931 1 2023-01-29 15:07:14 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-vdcpz 9a98e9c3-be45-46b9-b0ae-861557f744a7 0xc0027dc790 0xc0027dc791}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:07:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a98e9c3-be45-46b9-b0ae-861557f744a7\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:07:16 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0027dc838 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:07:16.693: INFO: Pod "test-deployment-vdcpz-764bc7c4b7-wzngw" is available: &Pod{ObjectMeta:{test-deployment-vdcpz-764bc7c4b7-wzngw test-deployment-vdcpz-764bc7c4b7- deployment-3952 0dfedbf1-d55a-43d2-92a6-fb0af1cb3996 12930 0 2023-01-29 15:07:14 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-vdcpz-764bc7c4b7 5ae09209-bced-4674-bf21-44fea0384bb2 0xc003cfe080 0xc003cfe081}] [] [{kube-controller-manager Update v1 2023-01-29 15:07:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ae09209-bced-4674-bf21-44fea0384bb2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:07:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fvg9d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvg9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},S
etHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:07:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:07:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.79,StartTime:2023-01-29 15:07:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:07:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://581d89594c50e8aa8af2edf28b93217c37d6ab859fd0a3087ed6013d616c3db2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:16.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3952" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":33,"skipped":647,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:16.708: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating Agnhost RC Jan 29 15:07:16.730: INFO: namespace kubectl-9302 Jan 29 15:07:16.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9302 create -f -' Jan 29 15:07:17.346: INFO: stderr: "" Jan 29 15:07:17.346: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 29 15:07:18.350: INFO: Selector matched 1 pods for map[app:agnhost] Jan 29 15:07:18.350: INFO: Found 0 / 1 Jan 29 15:07:19.351: INFO: Selector matched 1 pods for map[app:agnhost] Jan 29 15:07:19.351: INFO: Found 1 / 1 Jan 29 15:07:19.351: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 29 15:07:19.354: INFO: Selector matched 1 pods for map[app:agnhost] Jan 29 15:07:19.354: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 29 15:07:19.354: INFO: wait on agnhost-primary startup in kubectl-9302 Jan 29 15:07:19.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9302 logs agnhost-primary-zdhmt agnhost-primary' Jan 29 15:07:19.430: INFO: stderr: "" Jan 29 15:07:19.430: INFO: stdout: "Paused\n" STEP: exposing RC Jan 29 15:07:19.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9302 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 29 15:07:19.519: INFO: stderr: "" Jan 29 15:07:19.519: INFO: stdout: "service/rm2 exposed\n" Jan 29 15:07:19.529: INFO: Service rm2 in namespace kubectl-9302 found. STEP: exposing service Jan 29 15:07:21.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9302 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 29 15:07:21.626: INFO: stderr: "" Jan 29 15:07:21.626: INFO: stdout: "service/rm3 exposed\n" Jan 29 15:07:21.630: INFO: Service rm3 in namespace kubectl-9302 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:23.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9302" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":34,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:23.682: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 29 15:07:24.245: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 29 15:07:27.264: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation 
webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:27.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4418" for this suite. STEP: Destroying namespace "webhook-4418-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":35,"skipped":670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:27.502: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating 
secret with name secret-test-117e93c3-22c3-49a4-91fa-6866527dddca STEP: Creating a pod to test consume secrets Jan 29 15:07:27.533: INFO: Waiting up to 5m0s for pod "pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420" in namespace "secrets-4974" to be "Succeeded or Failed" Jan 29 15:07:27.537: INFO: Pod "pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12888ms Jan 29 15:07:29.540: INFO: Pod "pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007017535s Jan 29 15:07:31.550: INFO: Pod "pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016655361s STEP: Saw pod success Jan 29 15:07:31.550: INFO: Pod "pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420" satisfied condition "Succeeded or Failed" Jan 29 15:07:31.554: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420 container secret-volume-test: <nil> STEP: delete the pod Jan 29 15:07:31.572: INFO: Waiting for pod pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420 to disappear Jan 29 15:07:31.575: INFO: Pod pod-secrets-111869cb-0a60-4d3a-a49e-a38374912420 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:31.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4974" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":694,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:31.598: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create a ReplicaSet STEP: Verify that the required pods have come up Jan 29 15:07:31.626: INFO: Pod name sample-pod: Found 0 pods out of 3 Jan 29 15:07:36.631: INFO: Pod name sample-pod: Found 3 pods out of 3 STEP: ensuring each pod is running Jan 29 15:07:36.634: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} STEP: Listing all ReplicaSets STEP: DeleteCollection of the ReplicaSets STEP: After DeleteCollection verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:36.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8189" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":37,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:36.700: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:36.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6051" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":38,"skipped":727,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:36.797: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-74a10e6e-781f-4dcd-a165-38aad33833fc STEP: Creating a pod to test consume secrets Jan 29 15:07:36.895: INFO: Waiting up to 5m0s for pod "pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9" in namespace "secrets-7149" to be "Succeeded or Failed" Jan 29 15:07:36.902: INFO: Pod "pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299201ms Jan 29 15:07:38.908: INFO: Pod "pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9": Phase="Running", Reason="", readiness=false. Elapsed: 2.012598502s Jan 29 15:07:40.912: INFO: Pod "pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016537114s STEP: Saw pod success Jan 29 15:07:40.912: INFO: Pod "pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9" satisfied condition "Succeeded or Failed" Jan 29 15:07:40.915: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9 container secret-volume-test: <nil> STEP: delete the pod Jan 29 15:07:40.927: INFO: Waiting for pod pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9 to disappear Jan 29 15:07:40.930: INFO: Pod pod-secrets-c4f5b348-6aab-4afa-a02a-78d2c8f37ff9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:07:40.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7149" for this suite. STEP: Destroying namespace "secret-namespace-2415" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":728,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 29 15:07:40.952: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod with failed condition STEP: updating the pod Jan 29 15:09:41.502: INFO: Successfully updated pod "var-expansion-923cd2c8-16bc-4d9c-820a-ed291d0cd9c7" STEP: waiting for pod running STEP: deleting the pod gracefully Jan 29 15:09:43.508: INFO: Deleting pod "var-expansion-923cd2c8-16bc-4d9c-820a-ed291d0cd9c7" in namespace "var-expansion-6763" Jan 29 15:09:43.514: INFO: Wait up to 5m0s for pod "var-expansion-923cd2c8-16bc-4d9c-820a-ed291d0cd9c7" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:10:15.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6763" for this suite. • [SLOW TEST:154.581 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":40,"skipped":734,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes 
client Jan 29 15:10:15.555: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-projected-all-test-volume-ab383810-9c27-4f37-b7e5-9b5f9192b73d STEP: Creating secret with name secret-projected-all-test-volume-3884fa4d-e153-4a58-aba1-051e80742642 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 29 15:10:15.603: INFO: Waiting up to 5m0s for pod "projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a" in namespace "projected-2789" to be "Succeeded or Failed" Jan 29 15:10:15.606: INFO: Pod "projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.607205ms Jan 29 15:10:17.612: INFO: Pod "projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008301137s Jan 29 15:10:19.617: INFO: Pod "projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013617644s �[1mSTEP�[0m: Saw pod success Jan 29 15:10:19.617: INFO: Pod "projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a" satisfied condition "Succeeded or Failed" Jan 29 15:10:19.621: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a container projected-all-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:10:19.647: INFO: Waiting for pod projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a to disappear Jan 29 15:10:19.652: INFO: Pod projected-volume-65927276-b67f-4f2f-98fb-d666846a4f9a no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:10:19.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-2789" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":747,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:06:20.795: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-23545376-541f-46b0-a2bd-ed1bb5082604 in namespace container-probe-8333
Jan 29 15:06:22.829: INFO: Started pod liveness-23545376-541f-46b0-a2bd-ed1bb5082604 in namespace container-probe-8333
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 15:06:22.832: INFO: Initial restart count of pod liveness-23545376-541f-46b0-a2bd-ed1bb5082604 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:23.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8333" for this suite.
• [SLOW TEST:242.618 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:05:24.116: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 29 15:05:24.155: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:05:26.159: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 29 15:05:26.173: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
[... identical status line repeated every ~2s; still Pending through Jan 29 15:07:38.179 ...]
Jan 29 15:07:40.178: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
[... identical status line repeated every ~2s; still Running (Ready = false) through Jan 29 15:10:26.181 ...]
Jan 29 15:10:26.181: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002da240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0029489f0, 0x0?)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc004153c00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x335
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7
k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000c75ba0, 0x72ecb90)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:26.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7524" for this suite.
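The collapsed timeline above is easier to reason about as numbers. This is a hypothetical triage sketch (not part of the test suite; the timestamps are copied from the log) showing that the pod spent ~2m14s Pending and another ~2m46s Running but never Ready, which exhausts the framework's 5m `CreateSync` wait and matches the reported 302-second failure:

```python
# Triage sketch: phase durations for pod-with-poststart-exec-hook,
# using timestamps copied verbatim from the log above.
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
created   = datetime.strptime("Jan 29 15:05:26.173", FMT)  # first Pending poll
running   = datetime.strptime("Jan 29 15:07:40.178", FMT)  # first Running (Ready = false)
timed_out = datetime.strptime("Jan 29 15:10:26.181", FMT)  # CreateSync FAIL

pending_s   = (running - created).total_seconds()    # time stuck in Pending
not_ready_s = (timed_out - running).total_seconds()  # Running but never Ready
total_s     = (timed_out - created).total_seconds()  # ~300s, i.e. the 5m timeout

print(pending_s, not_ready_s, total_s)
```

The ~300s total lines up with the `Failure [302.074 seconds]` summary below, so the failure is a plain pod-readiness timeout rather than an assertion in the hook check itself.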
• Failure [302.074 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 29 15:10:26.181: Unexpected error:
        <*errors.errorString | 0xc0002da240>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":202,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:26.193: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 29 15:10:26.232: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:28.237: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 29 15:10:28.247: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:30.251: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 29 15:10:30.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 29 15:10:30.282: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 29 15:10:32.283: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 29 15:10:32.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 29 15:10:34.283: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 29 15:10:34.287: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:34.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5141" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":202,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:19.695: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:35.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6443" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":42,"skipped":766,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:35.854: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:35.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be
ready �[1mSTEP�[0m: Destroying namespace "events-8007" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":43,"skipped":787,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 29 15:10:34.337: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 29 15:10:34.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45" in namespace "projected-4603" to be "Succeeded or Failed" Jan 29 15:10:34.365: INFO: Pod "downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470275ms Jan 29 15:10:36.370: INFO: Pod "downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006716207s Jan 29 15:10:38.374: INFO: Pod "downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010954276s �[1mSTEP�[0m: Saw pod success Jan 29 15:10:38.374: INFO: Pod "downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45" satisfied condition "Succeeded or Failed" Jan 29 15:10:38.376: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 29 15:10:38.403: INFO: Waiting for pod downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45 to disappear Jan 29 15:10:38.406: INFO: Pod downwardapi-volume-8fe09fb6-73ac-4be4-bf7d-ec552a83da45 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 29 15:10:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4603" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":235,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:23.451: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Jan 29 15:10:23.490: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:25.494: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled
Jan 29 15:10:25.505: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:27.509: INFO: The status of Pod pod2 is Running (Ready = false)
Jan 29 15:10:29.510: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides
Jan 29 15:10:29.523: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:31.528: INFO: The status of Pod pod3 is Running (Ready = true)
Jan 29 15:10:31.539: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:10:33.544: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Jan 29 15:10:33.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.4 http://127.0.0.1:54323/hostname] Namespace:hostport-6457 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:10:33.546: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:10:33.547: INFO: ExecWithOptions: Clientset creation
Jan 29 15:10:33.547: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6457/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.4+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323
Jan 29 15:10:33.628: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.4:54323/hostname] Namespace:hostport-6457 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:10:33.628: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:10:33.629: INFO: ExecWithOptions: Clientset creation
Jan 29 15:10:33.629: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6457/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.4%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323 UDP
Jan 29 15:10:33.717: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.4 54323] Namespace:hostport-6457 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:10:33.717: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:10:33.718: INFO: ExecWithOptions: Clientset creation
Jan 29 15:10:33.718: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6457/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.4+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:38.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-6457" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":45,"skipped":1007,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:38.832: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 29 15:10:38.914: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 29 15:10:38.929: INFO: starting watch
STEP: patching
STEP: updating
Jan 29 15:10:38.960: INFO: waiting for watch events with expected annotations
Jan 29 15:10:38.960: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:38.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-2075" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":46,"skipped":1019,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:35.953: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-8f717cf6-06de-4db7-996a-71478cdb9718
STEP: Creating a pod to test consume secrets
Jan 29 15:10:35.989: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8" in namespace "projected-519" to be "Succeeded or Failed"
Jan 29 15:10:35.992: INFO: Pod "pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66664ms
Jan 29 15:10:37.995: INFO: Pod "pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006541465s
Jan 29 15:10:40.000: INFO: Pod "pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011091677s
STEP: Saw pod success
Jan 29 15:10:40.000: INFO: Pod "pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8" satisfied condition "Succeeded or Failed"
Jan 29 15:10:40.003: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 29 15:10:40.044: INFO: Waiting for pod pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8 to disappear
Jan 29 15:10:40.048: INFO: Pod pod-projected-secrets-9c24cde6-d5d6-4ae0-bec5-f103b7a5c9d8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:40.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-519" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":799,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:39.007: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Jan 29 15:10:39.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1055 create -f -'
Jan 29 15:10:40.018: INFO: stderr: ""
Jan 29 15:10:40.018: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 29 15:10:41.024: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 29 15:10:41.024: INFO: Found 0 / 1
Jan 29 15:10:42.023: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 29 15:10:42.023: INFO: Found 1 / 1
Jan 29 15:10:42.023: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 29 15:10:42.026: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 29 15:10:42.026: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 15:10:42.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1055 patch pod agnhost-primary-bk55l -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 29 15:10:42.119: INFO: stderr: ""
Jan 29 15:10:42.119: INFO: stdout: "pod/agnhost-primary-bk55l patched\n"
STEP: checking annotations
Jan 29 15:10:42.122: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 29 15:10:42.122: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:42.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1055" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":47,"skipped":1021,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:42.144: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:10:42.174: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 29 15:10:47.182: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 15:10:47.182: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 29 15:10:47.215: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2598 9c9585ee-ed0d-4941-9ff5-33c54cb587ad 14050 1 2023-01-29 15:10:47 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-29 15:10:47 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003d0ac18 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 29 15:10:47.219: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Jan 29 15:10:47.219: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 29 15:10:47.219: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2598 20415cd3-6172-493c-af5f-c9e7f308c12b 14052 1 2023-01-29 15:10:42 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9c9585ee-ed0d-4941-9ff5-33c54cb587ad 0xc00438bdc7 0xc00438bdc8}] [] [{e2e.test Update apps/v1 2023-01-29 15:10:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:10:43 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-29 15:10:47 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"9c9585ee-ed0d-4941-9ff5-33c54cb587ad\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00438be88 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:10:47.224: INFO: Pod "test-cleanup-controller-qngfl" is available: &Pod{ObjectMeta:{test-cleanup-controller-qngfl test-cleanup-controller- deployment-2598 0ddb97ea-a4eb-4c14-bf0a-b76272474435 13982 0 2023-01-29 15:10:42 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 20415cd3-6172-493c-af5f-c9e7f308c12b 0xc003d0af77 0xc003d0af78}] [] [{kube-controller-manager Update v1 2023-01-29 15:10:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20415cd3-6172-493c-af5f-c9e7f308c12b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:10:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g9gb8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g9gb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 
15:10:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:10:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:10:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:10:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.72,StartTime:2023-01-29 15:10:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:10:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://adb3a6815b53e1b6c2e050719fb53ec4faa0da81e7c10ca95f97b4995be480c6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:47.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2598" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":48,"skipped":1026,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:38.427: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 29 15:10:38.449: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:10:40.530: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:51.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3069" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":14,"skipped":245,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:47.348: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-1262/configmap-test-137e5204-1987-475d-83b0-af4c46b8adb7
STEP: Creating a pod to test consume configMaps
Jan 29 15:10:47.439: INFO: Waiting up to 5m0s for pod "pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236" in namespace "configmap-1262" to be "Succeeded or Failed"
Jan 29 15:10:47.443: INFO: Pod "pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236": Phase="Pending", Reason="", readiness=false. Elapsed: 3.48249ms
Jan 29 15:10:49.449: INFO: Pod "pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009108984s
Jan 29 15:10:51.460: INFO: Pod "pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020197799s
STEP: Saw pod success
Jan 29 15:10:51.460: INFO: Pod "pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236" satisfied condition "Succeeded or Failed"
Jan 29 15:10:51.464: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx pod pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236 container env-test: <nil>
STEP: delete the pod
Jan 29 15:10:51.481: INFO: Waiting for pod pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236 to disappear
Jan 29 15:10:51.484: INFO: Pod pod-configmaps-35c7bc87-384c-482e-be3e-17f38bc75236 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:51.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1262" for this suite.
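The "Waiting up to 5m0s for pod … to be "Succeeded or Failed"" records above come from a simple poll loop: the framework re-reads the pod phase every couple of seconds until it is terminal or the timeout expires. A minimal sketch of that polling logic, with the pod-status API call stubbed out as an injected `get_phase` callable (a hypothetical stand-in, not the framework's actual code):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or timeout expires.

    get_phase stands in for the pod-status lookup the e2e framework performs.
    Returns (phase, elapsed_seconds); raises TimeoutError on timeout.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated status sequence mirroring the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_condition(lambda: next(phases), interval=0.01)
print(phase)  # Succeeded
```

The real framework also records the `Reason` and readiness flag seen in each log record; this sketch keeps only the phase/elapsed bookkeeping that drives the loop.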
• ------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1079,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:51.426: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:10:51.456: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:54.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8519" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":15,"skipped":260,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:51.509: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota
Jan 29 15:10:51.538: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 29 15:10:56.548: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the replicaset Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:10:56.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2712" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":50,"skipped":1091,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:54.714: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 29 15:11:00.897: INFO: 80 pods remaining
Jan 29 15:11:00.897: INFO: 80 pods has nil DeletionTimestamp
Jan 29 15:11:00.897: INFO:
Jan 29 15:11:01.813: INFO: 71 pods remaining
Jan 29 15:11:01.813: INFO: 71 pods has nil DeletionTimestamp
Jan 29 15:11:01.813: INFO:
Jan 29 15:11:02.805: INFO: 60 pods remaining
Jan 29 15:11:02.805: INFO: 60 pods has nil DeletionTimestamp
Jan 29 15:11:02.805: INFO:
Jan 29 15:11:03.820: INFO: 40 pods remaining
Jan 29 15:11:03.820: INFO: 40 pods has nil DeletionTimestamp
Jan 29 15:11:03.820: INFO:
Jan 29 15:11:04.801: INFO: 31 pods remaining
Jan 29 15:11:04.801: INFO: 31 pods has nil DeletionTimestamp
Jan 29 15:11:04.801: INFO:
Jan 29 15:11:05.806: INFO: 20 pods remaining
Jan 29 15:11:05.806: INFO: 20 pods has nil DeletionTimestamp
Jan 29 15:11:05.806: INFO:
STEP: Gathering metrics
Jan 29 15:11:06.832: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true)
Jan 29 15:11:06.944: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:06.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5798" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":16,"skipped":356,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:40.087: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 15:10:42.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.141: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.144: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.148: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.151: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.156: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.159: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.163: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:42.163: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:10:47.168: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.171: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.174: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.179: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.183: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.187: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.190: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.194: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:47.194: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:10:52.167: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.170: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.174: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.177: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.180: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.183: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.186: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.190: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:52.190: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:10:57.258: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.331: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.338: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.364: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.380: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.426: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.453: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.493: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:10:57.493: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:11:02.181: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.204: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.240: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.252: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.266: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.279: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.299: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.305: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:02.305: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:11:07.178: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.183: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.188: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.193: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.200: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.206: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.214: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.221: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local from pod dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c: the server could not find the requested resource (get pods dns-test-be584607-deb4-4f03-8aa2-9e920504485c)
Jan 29 15:11:07.221: INFO: Lookups using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6697.svc.cluster.local jessie_udp@dns-test-service-2.dns-6697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6697.svc.cluster.local]
Jan 29 15:11:12.195: INFO: DNS probes using dns-6697/dns-test-be584607-deb4-4f03-8aa2-9e920504485c succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:12.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6697" for this suite.
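The wheezy/jessie probe loops above write an `OK` marker file under `/results/` named `<image>_<proto>@<fqdn>` for every lookup that returns an answer, and each "Lookups … failed for: […]" record is simply the expected marker names minus the files already present. A small sketch of that bookkeeping (the `failed_lookups` helper is illustrative, not the framework's actual code):

```python
def failed_lookups(expected, ok_files):
    """Expected probe names that have no OK marker file yet, in probe order."""
    present = set(ok_files)
    return [name for name in expected if name not in present]

base = "dns-test-service-2.dns-6697.svc.cluster.local"
expected = [
    f"{image}_{proto}@{host}"
    for image in ("wheezy", "jessie")            # the two prober images
    for host in (f"dns-querier-2.{base}", base)  # pod FQDN, then service FQDN
    for proto in ("udp", "tcp")                  # dig +notcp vs dig +tcp
]
assert len(expected) == 8

# Before any dig answers, every probe appears in the "failed for" list.
print(failed_lookups(expected, []))
# Once all eight marker files exist, the list is empty and the probe passes.
print(failed_lookups(expected, expected))  # []
```

The six failing rounds in the log correspond to the probe pod not yet being readable; once the markers become readable at 15:11:12.195 the failure list empties and the spec passes.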
• ------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":45,"skipped":817,"failed":0}
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:12.245: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186
[It] should support creating IngressClass API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 29 15:11:12.292: INFO: starting watch
STEP: patching
STEP: updating
Jan 29 15:11:12.303: INFO: waiting for watch events with expected annotations
Jan 29 15:11:12.303: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:12.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-9101" for this suite.
• ------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":46,"skipped":820,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:10:56.943: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 29 15:10:57.287: INFO: Waiting up to 5m0s for pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb" in namespace "emptydir-5623" to be "Succeeded or Failed"
Jan 29 15:10:57.309: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.918679ms
Jan 29 15:10:59.319: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031693233s
Jan 29 15:11:01.326: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039172369s
Jan 29 15:11:03.335: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048351787s
Jan 29 15:11:05.339: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052125337s
Jan 29 15:11:07.358: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071607669s
Jan 29 15:11:09.363: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.076524338s
Jan 29 15:11:11.367: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.080021678s
Jan 29 15:11:13.375: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.088321352s
Jan 29 15:11:15.380: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.093191668s
STEP: Saw pod success
Jan 29 15:11:15.380: INFO: Pod "pod-a0ae4cc4-b586-434e-8453-be7854d105eb" satisfied condition "Succeeded or Failed"
Jan 29 15:11:15.383: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-a0ae4cc4-b586-434e-8453-be7854d105eb container test-container: <nil>
STEP: delete the pod
Jan 29 15:11:15.398: INFO: Waiting for pod pod-a0ae4cc4-b586-434e-8453-be7854d105eb to disappear
Jan 29 15:11:15.401: INFO: Pod pod-a0ae4cc4-b586-434e-8453-be7854d105eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:15.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5623" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":1140,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:12.359: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:11:12.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f" in namespace "projected-6976" to be "Succeeded or Failed"
Jan 29 15:11:12.403: INFO: Pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.413221ms
Jan 29 15:11:14.408: INFO: Pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016120205s
Jan 29 15:11:16.412: INFO: Pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020492718s
Jan 29 15:11:18.417: INFO: Pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024834385s
STEP: Saw pod success
Jan 29 15:11:18.417: INFO: Pod "downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f" satisfied condition "Succeeded or Failed"
Jan 29 15:11:18.419: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f container client-container: <nil>
STEP: delete the pod
Jan 29 15:11:18.434: INFO: Waiting for pod downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f to disappear
Jan 29 15:11:18.437: INFO: Pod downwardapi-volume-1258749f-02f1-425f-abc3-5b85353a2d5f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:18.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6976" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":835,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:15.432: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:11:15.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877" in namespace "projected-6426" to be "Succeeded or Failed"
Jan 29 15:11:15.468: INFO: Pod "downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129496ms
Jan 29 15:11:17.472: INFO: Pod "downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665315s
Jan 29 15:11:19.476: INFO: Pod "downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01120217s
STEP: Saw pod success
Jan 29 15:11:19.476: INFO: Pod "downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877" satisfied condition "Succeeded or Failed"
Jan 29 15:11:19.479: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877 container client-container: <nil>
STEP: delete the pod
Jan 29 15:11:19.491: INFO: Waiting for pod downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877 to disappear
Jan 29 15:11:19.497: INFO: Pod downwardapi-volume-c598e866-ecb6-4a11-9e11-9ee0a078f877 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:19.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6426" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:18.453: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8317" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":48,"skipped":840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:31.574: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pods
Jan 29 15:11:31.608: INFO: created test-pod-1
Jan 29 15:11:31.611: INFO: created test-pod-2
Jan 29 15:11:31.621: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Jan 29 15:11:31.621: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-1459' to be running and ready
Jan 29 15:11:31.642: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:11:31.642: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:11:31.642: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 29 15:11:31.642: INFO: 0 / 3 pods in namespace 'pods-1459' are running and ready (0 seconds elapsed)
Jan 29 15:11:31.642: INFO: expected 0 pod replicas in namespace 'pods-1459', 0 are Running and Ready.
Jan 29 15:11:31.642: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 15:11:31.642: INFO: test-pod-1 k8s-upgrade-and-conformance-pw1vby-worker-biy623 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC }]
Jan 29 15:11:31.642: INFO: test-pod-2 k8s-upgrade-and-conformance-pw1vby-worker-693qzd Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC }]
Jan 29 15:11:31.642: INFO: test-pod-3 k8s-upgrade-and-conformance-pw1vby-worker-693qzd Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 15:11:31 +0000 UTC }]
Jan 29 15:11:31.642: INFO:
Jan 29 15:11:33.654: INFO: 3 / 3 pods in namespace 'pods-1459' are running and ready (2 seconds elapsed)
Jan 29 15:11:33.654: INFO: expected 0 pod replicas in namespace 'pods-1459', 0 are Running and Ready.
STEP: waiting for all pods to be deleted
Jan 29 15:11:33.671: INFO: Pod quantity 3 is different from expected quantity 0
Jan 29 15:11:34.675: INFO: Pod quantity 3 is different from expected quantity 0
Jan 29 15:11:35.676: INFO: Pod quantity 3 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1459" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":49,"skipped":865,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:36.700: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-1a141376-709c-4df3-91c2-f18893767051
STEP: Creating a pod to test consume configMaps
Jan 29 15:11:36.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e" in namespace "projected-3427" to be "Succeeded or Failed"
Jan 29 15:11:36.740: INFO: Pod "pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03792ms
Jan 29 15:11:38.744: INFO: Pod "pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007293813s
Jan 29 15:11:40.748: INFO: Pod "pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011801675s
STEP: Saw pod success
Jan 29 15:11:40.748: INFO: Pod "pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e" satisfied condition "Succeeded or Failed"
Jan 29 15:11:40.751: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:11:40.765: INFO: Waiting for pod pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e to disappear
Jan 29 15:11:40.768: INFO: Pod pod-projected-configmaps-9b4b1b42-1209-4330-8b6a-dfc82b35b77e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:40.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3427" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:40.818: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:11:40.842: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:41.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-498" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":51,"skipped":906,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:41.902: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Jan 29 15:11:41.934: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:43.938: INFO: The status of Pod test-pod is Running (Ready = true)
STEP: Creating hostNetwork=true pod
Jan 29 15:11:43.952: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:45.956: INFO: The status of Pod test-host-network-pod is Running (Ready = true)
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 29 15:11:45.958: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:45.958: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:45.959: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:45.959: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.038: INFO: Exec stderr: ""
Jan 29 15:11:46.038: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.038: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.038: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.038: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.088: INFO: Exec stderr: ""
Jan 29 15:11:46.088: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.088: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.089: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.089: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.165: INFO: Exec stderr: ""
Jan 29 15:11:46.165: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.165: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.166: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.166: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.244: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 29 15:11:46.244: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.244: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.245: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.245: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.295: INFO: Exec stderr: ""
Jan 29 15:11:46.295: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.295: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.296: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.296: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.368: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 29 15:11:46.368: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.368: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.369: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.369: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.449: INFO: Exec stderr: ""
Jan 29 15:11:46.449: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.449: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.450: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.450: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.529: INFO: Exec stderr: ""
Jan 29 15:11:46.529: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.529: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.529: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.529: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.603: INFO: Exec stderr: ""
Jan 29 15:11:46.603: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1249 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 29 15:11:46.603: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 29 15:11:46.604: INFO: ExecWithOptions: Clientset creation
Jan 29 15:11:46.604: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1249/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
Jan 29 15:11:46.655: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:46.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1249" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":927,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:46.679: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 15:11:48.746: INFO: DNS probes using dns-526/dns-test-12deeac9-67ba-4be5-b20d-427218cf676e succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:48.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-526" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":53,"skipped":937,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:48.781: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:11:48.809: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f" in namespace "projected-8664" to be "Succeeded or Failed"
Jan 29 15:11:48.812: INFO: Pod "downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552011ms
Jan 29 15:11:50.816: INFO: Pod "downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006737697s
Jan 29 15:11:52.820: INFO: Pod "downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010438275s
STEP: Saw pod success
Jan 29 15:11:52.820: INFO: Pod "downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f" satisfied condition "Succeeded or Failed"
Jan 29 15:11:52.823: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f container client-container: <nil>
STEP: delete the pod
Jan 29 15:11:52.838: INFO: Waiting for pod downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f to disappear
Jan 29 15:11:52.841: INFO: Pod downwardapi-volume-9f29b05b-540e-41fe-a267-465f00d9298f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:52.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8664" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":937,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:52.950: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:11:52.987: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:53.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5566" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":55,"skipped":999,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:53.534: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-0fcaf9ce-72b6-4404-834c-637e7a2832c0
STEP: Creating a pod to test consume configMaps
Jan 29 15:11:53.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32" in namespace "configmap-3739" to be "Succeeded or Failed"
Jan 29 15:11:53.568: INFO: Pod "pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803617ms
Jan 29 15:11:55.572: INFO: Pod "pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007564674s
Jan 29 15:11:57.577: INFO: Pod "pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012396604s
STEP: Saw pod success
Jan 29 15:11:57.577: INFO: Pod "pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32" satisfied condition "Succeeded or Failed"
Jan 29 15:11:57.580: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32 container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:11:57.592: INFO: Waiting for pod pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32 to disappear
Jan 29 15:11:57.595: INFO: Pod pod-configmaps-ed516667-dc86-4cf6-9c36-c394c3812d32 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:11:57.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3739" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1000,"failed":0}
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:07:01.430: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:01.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1303" for this suite.
• [SLOW TEST:300.050 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":26,"skipped":472,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:57.632: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 15:12:01.685: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:01.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5091" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1020,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:01.740: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 29 15:12:01.771: INFO: Waiting up to 5m0s for pod "pod-15391fde-652f-40ca-bcd0-65e5a3698c46" in namespace "emptydir-4348" to be "Succeeded or Failed"
Jan 29 15:12:01.773: INFO: Pod "pod-15391fde-652f-40ca-bcd0-65e5a3698c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637228ms
Jan 29 15:12:03.777: INFO: Pod "pod-15391fde-652f-40ca-bcd0-65e5a3698c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005854095s
Jan 29 15:12:05.781: INFO: Pod "pod-15391fde-652f-40ca-bcd0-65e5a3698c46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010251938s
STEP: Saw pod success
Jan 29 15:12:05.781: INFO: Pod "pod-15391fde-652f-40ca-bcd0-65e5a3698c46" satisfied condition "Succeeded or Failed"
Jan 29 15:12:05.784: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-15391fde-652f-40ca-bcd0-65e5a3698c46 container test-container: <nil>
STEP: delete the pod
Jan 29 15:12:05.795: INFO: Waiting for pod pod-15391fde-652f-40ca-bcd0-65e5a3698c46 to disappear
Jan 29 15:12:05.798: INFO: Pod pod-15391fde-652f-40ca-bcd0-65e5a3698c46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:05.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4348" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1036,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:01.500: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Jan 29 15:12:01.518: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5255" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":27,"skipped":488,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:07.237: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 29 15:12:07.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818" in namespace "projected-935" to be "Succeeded or Failed"
Jan 29 15:12:07.268: INFO: Pod "downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.813356ms
Jan 29 15:12:09.271: INFO: Pod "downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00666812s
Jan 29 15:12:11.277: INFO: Pod "downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011759683s
STEP: Saw pod success
Jan 29 15:12:11.277: INFO: Pod "downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818" satisfied condition "Succeeded or Failed"
Jan 29 15:12:11.283: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818 container client-container: <nil>
STEP: delete the pod
Jan 29 15:12:11.302: INFO: Waiting for pod downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818 to disappear
Jan 29 15:12:11.304: INFO: Pod downwardapi-volume-96764fee-d47a-4261-85a5-d2bb2ddb5818 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:11.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-935" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":493,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:05.867: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 29 15:12:05.901: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:12:07.912: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 29 15:12:07.925: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:12:09.929: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 29 15:12:09.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 15:12:09.947: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 15:12:11.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 15:12:11.953: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 15:12:13.947: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 15:12:13.951: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:13.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1079,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:14.018: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 29 15:12:14.055: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:12:16.060: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 29 15:12:16.072: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:12:18.076: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jan 29 15:12:18.084: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 15:12:18.087: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 15:12:20.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 15:12:20.091: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:20.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9253" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1122,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:06.986: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-b2cd2e14-7b05-42aa-b74a-c055abc054da
STEP: Creating configMap with name cm-test-opt-upd-8604ec60-b37a-442e-93db-58589f0338ff
STEP: Creating the pod
Jan 29 15:11:07.098: INFO: The status of Pod pod-projected-configmaps-966bb25f-dfc0-4b7b-a68e-ab0ed619272f is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:09.116: INFO: The status of Pod pod-projected-configmaps-966bb25f-dfc0-4b7b-a68e-ab0ed619272f is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:11.103: INFO: The status of Pod pod-projected-configmaps-966bb25f-dfc0-4b7b-a68e-ab0ed619272f is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:13.102: INFO: The status of Pod pod-projected-configmaps-966bb25f-dfc0-4b7b-a68e-ab0ed619272f is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-b2cd2e14-7b05-42aa-b74a-c055abc054da
STEP: Updating configmap cm-test-opt-upd-8604ec60-b37a-442e-93db-58589f0338ff
STEP: Creating configMap with name cm-test-opt-create-d22f6dd3-8c03-4de6-a8c0-fe112ef3d1ab
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:23.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4452" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":367,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:23.457: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 29 15:12:23.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2387 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2'
Jan 29 15:12:23.559: INFO: stderr: ""
Jan 29 15:12:23.559: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
Jan 29 15:12:23.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2387 delete pods e2e-test-httpd-pod'
Jan 29 15:12:25.446: INFO: stderr: ""
Jan 29 15:12:25.446: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:25.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2387" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":18,"skipped":375,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:11.322: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Jan 29 15:12:11.345: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:27.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-947" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":29,"skipped":496,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:27.282: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name s-test-opt-del-53f0cc8c-c071-4fef-a97f-09f111bcd525
STEP: Creating secret with name s-test-opt-upd-890ebe24-5d5c-4b10-8b8c-6044481c74e0
STEP: Creating the pod
Jan 29 15:12:27.326: INFO: The status of Pod pod-secrets-20241b20-bc68-4c3d-95fa-98d293de7c07 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:12:29.331: INFO: The status of Pod pod-secrets-20241b20-bc68-4c3d-95fa-98d293de7c07 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-53f0cc8c-c071-4fef-a97f-09f111bcd525
STEP: Updating secret s-test-opt-upd-890ebe24-5d5c-4b10-8b8c-6044481c74e0
STEP: Creating secret with name s-test-opt-create-53e376d3-c51c-4e3f-b6f4-afcc0f3a149d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:33.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5229" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":510,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:33.402: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-50a503f7-b7f2-44e1-9b14-faf0cdaa6680
STEP: Creating a pod to test consume secrets
Jan 29 15:12:33.436: INFO: Waiting up to 5m0s for pod "pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b" in namespace "secrets-8659" to be "Succeeded or Failed"
Jan 29 15:12:33.438: INFO: Pod "pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.574991ms
Jan 29 15:12:35.443: INFO: Pod "pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00681339s
Jan 29 15:12:37.447: INFO: Pod "pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011458566s
STEP: Saw pod success
Jan 29 15:12:37.447: INFO: Pod "pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b" satisfied condition "Succeeded or Failed"
Jan 29 15:12:37.450: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b container secret-volume-test: <nil>
STEP: delete the pod
Jan 29 15:12:37.463: INFO: Waiting for pod pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b to disappear
Jan 29 15:12:37.466: INFO: Pod pod-secrets-2ba57bb2-574a-4692-8d9c-8e113d5bd01b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:37.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8659" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":516,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:37.494: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:41.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3521" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":32,"skipped":533,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:25.461: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-dnj8
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 15:12:25.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dnj8" in namespace "subpath-4533" to be "Succeeded or Failed"
Jan 29 15:12:25.501: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77404ms
Jan 29 15:12:27.505: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00739205s
Jan 29 15:12:29.509: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 4.011905382s
Jan 29 15:12:31.513: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 6.015347985s
Jan 29 15:12:33.517: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 8.02027132s
Jan 29 15:12:35.521: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 10.024149024s
Jan 29 15:12:37.528: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 12.030927039s
Jan 29 15:12:39.536: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 14.038795121s
Jan 29 15:12:41.543: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 16.045363097s
Jan 29 15:12:43.549: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 18.051573349s
Jan 29 15:12:45.556: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=true. Elapsed: 20.05838238s
Jan 29 15:12:47.562: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Running", Reason="", readiness=false. Elapsed: 22.065190282s
Jan 29 15:12:49.568: INFO: Pod "pod-subpath-test-configmap-dnj8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07124384s
STEP: Saw pod success
Jan 29 15:12:49.569: INFO: Pod "pod-subpath-test-configmap-dnj8" satisfied condition "Succeeded or Failed"
Jan 29 15:12:49.573: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-subpath-test-configmap-dnj8 container test-container-subpath-configmap-dnj8: <nil>
STEP: delete the pod
Jan 29 15:12:49.600: INFO: Waiting for pod pod-subpath-test-configmap-dnj8 to disappear
Jan 29 15:12:49.605: INFO: Pod pod-subpath-test-configmap-dnj8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dnj8
Jan 29 15:12:49.605: INFO: Deleting pod "pod-subpath-test-configmap-dnj8" in namespace "subpath-4533"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:49.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4533" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":19,"skipped":378,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:11:19.539: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-upd-a8bfc0ec-aaaa-4cc4-8eca-f4178132bd7d
STEP: Creating the pod
Jan 29 15:11:19.576: INFO: The status of Pod pod-configmaps-397f1ce5-d846-482c-baf5-835220f72924 is Pending, waiting for it to be Running (with Ready = true)
Jan 29 15:11:21.581: INFO: The status of Pod pod-configmaps-397f1ce5-d846-482c-baf5-835220f72924 is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-a8bfc0ec-aaaa-4cc4-8eca-f4178132bd7d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:49.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1177" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1176,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:41.614: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jan 29 15:12:51.716: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-pw1vby-8nwgl-sl9bk is Running (Ready = true)
Jan 29 15:12:51.813: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:51.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1851" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":33,"skipped":551,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:49.671: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-5480945a-cc9f-4100-aef2-bd402067d560
STEP: Creating a pod to test consume secrets
Jan 29 15:12:49.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135" in namespace "projected-4837" to be "Succeeded or Failed"
Jan 29 15:12:49.729: INFO: Pod "pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135": Phase="Pending", Reason="", readiness=false. Elapsed: 3.299651ms
Jan 29 15:12:51.736: INFO: Pod "pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009790526s
Jan 29 15:12:53.744: INFO: Pod "pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017814755s
STEP: Saw pod success
Jan 29 15:12:53.744: INFO: Pod "pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135" satisfied condition "Succeeded or Failed"
Jan 29 15:12:53.750: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-biy623 pod pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 29 15:12:53.776: INFO: Waiting for pod pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135 to disappear
Jan 29 15:12:53.782: INFO: Pod pod-projected-secrets-c04ae692-e16d-4694-9ca8-2e32ec0ec135 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:53.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4837" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":400,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:53.861: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:53.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8069" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":21,"skipped":427,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:51.978: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:12:52.040: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 29 15:12:57.048: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Scaling up "test-rs" replicaset
Jan 29 15:12:57.057: INFO: Updating replica set "test-rs"
STEP: patching the ReplicaSet
Jan 29 15:12:57.070: INFO: observed ReplicaSet test-rs in namespace replicaset-3308 with ReadyReplicas 1, AvailableReplicas 1
Jan 29 15:12:57.086: INFO: observed ReplicaSet test-rs in namespace replicaset-3308 with ReadyReplicas 1, AvailableReplicas 1
Jan 29 15:12:57.106: INFO: observed ReplicaSet test-rs in namespace replicaset-3308 with ReadyReplicas 1, AvailableReplicas 1
Jan 29 15:12:57.136: INFO: observed ReplicaSet test-rs in namespace replicaset-3308 with ReadyReplicas 1, AvailableReplicas 1
Jan 29 15:12:58.591: INFO: observed ReplicaSet test-rs in namespace replicaset-3308 with ReadyReplicas 2, AvailableReplicas 2
Jan 29 15:12:58.670: INFO: observed Replicaset test-rs in namespace replicaset-3308 with ReadyReplicas 3 found true
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:58.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3308" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":34,"skipped":611,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:50.033: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-2945
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2945
STEP: Deleting pre-stop pod
Jan 29 15:12:59.175: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:12:59.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2945" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":54,"skipped":1192,"failed":0}
S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:59.255: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:12:59.349: INFO: Got root ca configmap in namespace "svcaccounts-1686"
Jan 29 15:12:59.364: INFO: Deleted root ca configmap in namespace "svcaccounts-1686"
STEP: waiting for a new root ca configmap created
Jan 29 15:12:59.878: INFO: Recreated root ca configmap in namespace "svcaccounts-1686"
Jan 29 15:12:59.892: INFO: Updated root ca configmap in namespace "svcaccounts-1686"
STEP: waiting for the root ca configmap reconciled
Jan 29 15:13:00.399: INFO: Reconciled root ca configmap in namespace "svcaccounts-1686"
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:13:00.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1686" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":55,"skipped":1193,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:53.927: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4552
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4552
STEP: creating replication controller externalsvc in namespace services-4552
I0129 15:12:54.056657      16 runners.go:193] Created replication controller with name: externalsvc, namespace: services-4552, replica count: 2
I0129 15:12:57.108540      16 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Jan 29 15:12:57.171: INFO: Creating new exec pod
Jan 29 15:12:59.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4552 exec execpodfspgf -- /bin/sh -x -c nslookup nodeport-service.services-4552.svc.cluster.local'
Jan 29 15:12:59.717: INFO: stderr: "+ nslookup nodeport-service.services-4552.svc.cluster.local\n"
Jan 29 15:12:59.717: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-4552.svc.cluster.local\tcanonical name = externalsvc.services-4552.svc.cluster.local.\nName:\texternalsvc.services-4552.svc.cluster.local\nAddress: 10.142.20.168\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4552, will wait for the garbage collector to delete the pods
Jan 29 15:12:59.783: INFO: Deleting ReplicationController externalsvc took: 8.183006ms
Jan 29 15:12:59.883: INFO: Terminating ReplicationController externalsvc pods took: 100.160358ms
Jan 29 15:13:02.349: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:13:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4552" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":22,"skipped":431,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:12:58.726: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-4aa960fd-fcd7-4b95-8fbb-033540ffd080
STEP: Creating a pod to test consume configMaps
Jan 29 15:12:58.777: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d" in namespace "projected-8961" to be "Succeeded or Failed"
Jan 29 15:12:58.781: INFO: Pod "pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292784ms
Jan 29 15:13:00.790: INFO: Pod "pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012873805s
Jan 29 15:13:02.795: INFO: Pod "pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017867062s
STEP: Saw pod success
Jan 29 15:13:02.795: INFO: Pod "pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d" satisfied condition "Succeeded or Failed"
Jan 29 15:13:02.800: INFO: Trying to get logs from node k8s-upgrade-and-conformance-pw1vby-worker-693qzd pod pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d container agnhost-container: <nil>
STEP: delete the pod
Jan 29 15:13:02.824: INFO: Waiting for pod pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d to disappear
Jan 29 15:13:02.829: INFO: Pod pod-projected-configmaps-da0ddf2c-1749-44a8-baa9-f5364de39f7d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:13:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8961" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":630,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:13:00.438: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 29 15:13:01.136: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 29 15:13:04.177: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 29 15:13:07.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-8518 attach --namespace=webhook-8518 to-be-attached-pod -i -c=container1'
Jan 29 15:13:08.098: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 29 15:13:08.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8518" for this suite.
STEP: Destroying namespace "webhook-8518-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":56,"skipped":1203,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 29 15:13:02.448: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 29 15:13:02.490: INFO: Creating deployment "webserver-deployment"
Jan 29 15:13:02.500: INFO: Waiting for observed generation 1
Jan 29 15:13:04.540: INFO: Waiting for all required pods to come up
Jan 29 15:13:04.565: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 29 15:13:06.667: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 29 15:13:06.686: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 29 15:13:06.710: INFO: Updating deployment webserver-deployment
Jan 29 15:13:06.710: INFO: Waiting for observed generation 2
Jan 29 15:13:08.807: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 29 15:13:08.898: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 29 15:13:08.914: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 29 15:13:09.056: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 29 15:13:09.056: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 29 15:13:09.089: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 29 15:13:09.219: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 29 15:13:09.219: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 29 15:13:09.392: INFO:
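The replica counts this spec goes on to assert (first rollout's ReplicaSet at 20, second at 13) follow from proportional scaling: the deployment is scaled from 10 to 30 with maxSurge:3, so up to 33 pods are allowed, and the 20 extra replicas are split between the two ReplicaSets (currently at 8 and 5) in proportion to their size. A minimal sketch of that arithmetic, with illustrative names — this is not the Deployment controller's actual code, just the expected distribution:

```python
def proportional_scale(replicas, new_total):
    """Split (new_total - current) extra replicas across ReplicaSets
    in proportion to each ReplicaSet's current size."""
    current = sum(replicas.values())
    delta = new_total - current
    items = sorted(replicas.items(), key=lambda kv: -kv[1])  # largest first
    out = {}
    handed_out = 0
    for i, (name, count) in enumerate(items):
        if i == len(items) - 1:
            share = delta - handed_out  # last ReplicaSet absorbs rounding leftovers
        else:
            share = round(count * delta / current)
        out[name] = count + share
        handed_out += share
    return out

# Old ReplicaSet at 8, new (broken-image) ReplicaSet at 5; 30 desired + maxSurge 3 = 33 total.
print(proportional_scale({"webserver-old": 8, "webserver-new": 5}, 33))
# → {'webserver-old': 20, 'webserver-new': 13}
```

These are exactly the `.spec.replicas = 20` and `.spec.replicas = 13` values the log verifies next.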
Updating deployment webserver-deployment Jan 29 15:13:09.393: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 29 15:13:09.544: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 29 15:13:11.676: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 29 15:13:11.987: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3971 a4b9d92c-53ad-4607-822c-14a00295dbb8 17468 3 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00293fbe8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum 
availability.,LastUpdateTime:2023-01-29 15:13:09 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-01-29 15:13:10 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 29 15:13:12.044: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-3971 81827e09-ba76-4182-adbf-912dc5ef25ca 17456 3 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a4b9d92c-53ad-4607-822c-14a00295dbb8 0xc00293ffd7 0xc00293ffd8}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4b9d92c-53ad-4607-822c-14a00295dbb8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00346c128 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:13:12.045: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 29 15:13:12.045: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-3971 c208afb0-2d04-4590-9239-ca118965e88c 17426 3 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a4b9d92c-53ad-4607-822c-14a00295dbb8 0xc00346c187 0xc00346c188}] [] [{kube-controller-manager Update apps/v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4b9d92c-53ad-4607-822c-14a00295dbb8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-29 15:13:04 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00346c218 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler 
[] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 29 15:13:12.353: INFO: Pod "webserver-deployment-566f96c878-2vrlw" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-2vrlw webserver-deployment-566f96c878- deployment-3971 0087ac58-dcb7-4a41-ab8e-4ac0c35dc640 17317 0 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f291e7 0xc000f291e8}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9jxfb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jxfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.143,StartTime:2023-01-29 15:13:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.354: INFO: Pod "webserver-deployment-566f96c878-4nr5f" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-4nr5f webserver-deployment-566f96c878- deployment-3971 5f490706-6d7e-451a-a0d9-9710032f255c 17463 0 2023-01-29 15:13:10 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f29410 0xc000f29411}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7njt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7njt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.355: INFO: Pod "webserver-deployment-566f96c878-6bp6x" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-6bp6x webserver-deployment-566f96c878- deployment-3971 98c6a2dc-a056-4bc8-baa6-67a0fdd6a7ac 17415 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f295e0 0xc000f295e1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hlzm2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-hlzm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase
:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:13:12.362: INFO: Pod "webserver-deployment-566f96c878-6xxp6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-6xxp6 webserver-deployment-566f96c878- deployment-3971 31b2719a-8b91-4d0e-8258-e96644f0979d 17452 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f297b0 0xc000f297b1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xgx5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xgx5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:13:12.373: INFO: Pod "webserver-deployment-566f96c878-b9kxc" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-b9kxc webserver-deployment-566f96c878- deployment-3971 dfe39a8d-ad38-4b44-b1ef-4b3c46b4c8c6 17324 0 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f29980 0xc000f29981}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.127\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wc7rh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements
{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wc7rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephe
meralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.127,StartTime:2023-01-29 15:13:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:13:12.374: INFO: Pod "webserver-deployment-566f96c878-bgptj" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-bgptj webserver-deployment-566f96c878- deployment-3971 93340745-8ff1-4369-8d30-006f24f85b60 17467 0 2023-01-29 15:13:09 
+0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f29b90 0xc000f29b91}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hflbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hflbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:13:12.392: INFO: Pod "webserver-deployment-566f96c878-f6d6b" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-f6d6b webserver-deployment-566f96c878- deployment-3971 3358ca6b-60e7-45b6-9098-dd4e7b6338ab 17458 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f29d60 0xc000f29d61}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j8ttt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-j8ttt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Cond
itions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 29 15:13:12.393: INFO: Pod "webserver-deployment-566f96c878-gc4fx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-gc4fx webserver-deployment-566f96c878- deployment-3971 02378046-33ad-42fd-a897-cfc72270bd16 17469 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc000f29f30 0xc000f29f31}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vqlbq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqlbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.393: INFO: Pod "webserver-deployment-566f96c878-grj48" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-grj48 webserver-deployment-566f96c878- deployment-3971 5951dcac-ac55-476c-9bf2-eea68ae50e3d 17450 0 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc003a18100 0xc003a18101}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-28z9h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements
{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28z9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralCont
ainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.104,StartTime:2023-01-29 15:13:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.394: INFO: Pod "webserver-deployment-566f96c878-h47jk" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-h47jk webserver-deployment-566f96c878- deployment-3971 afde2800-3133-4652-b81b-b79c3f180745 17432 0 
2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc003a18300 0xc003a18301}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p2nb4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2nb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.404: INFO: Pod "webserver-deployment-566f96c878-l2jrd" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-l2jrd webserver-deployment-566f96c878- deployment-3971 9f538495-9097-4025-b3f2-ef630b478eb0 17428 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc003a184d0 0xc003a184d1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hw29f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-hw29f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Cond
itions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.405: INFO: Pod "webserver-deployment-566f96c878-ltvlq" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ltvlq webserver-deployment-566f96c878- deployment-3971 157b0701-3eee-4b77-8db0-ba9b707bd2b6 17334 0 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc003a186a0 0xc003a186a1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sjjlt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjjlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.151,StartTime:2023-01-29 15:13:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.405: INFO: Pod "webserver-deployment-566f96c878-zk5j9" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-zk5j9 webserver-deployment-566f96c878- deployment-3971 22607b3b-fd0c-4e16-a3b6-a2161edf2f7e 17338 0 2023-01-29 15:13:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 81827e09-ba76-4182-adbf-912dc5ef25ca 0xc003a188a0 0xc003a188a1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81827e09-ba76-4182-adbf-912dc5ef25ca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2btc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2btc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.103,StartTime:2023-01-29 15:13:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.407: INFO: Pod "webserver-deployment-5d9fdcc779-4flfw" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4flfw webserver-deployment-5d9fdcc779- deployment-3971 b100a0eb-671b-4d68-832c-3490e458c621 17199 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a18aa0 0xc003a18aa1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.141\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n6n5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6n5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.141,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://27bfa8d693d93fd786783dc78f21960e5318ba4aafe0dc70d502dae8fe8f1dd6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.423: INFO: Pod "webserver-deployment-5d9fdcc779-5gcrv" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-5gcrv webserver-deployment-5d9fdcc779- deployment-3971 8fb4f1f9-8b0a-41dd-acb9-8b42919174d0 17231 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a18c70 0xc003a18c71}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qb6z8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qb6z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:06 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.142,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f1a706b968e15efdcbffc928a6afa14bffc813e70c044bdc30da70bb668e7890,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.424: INFO: Pod "webserver-deployment-5d9fdcc779-5p86f" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-5p86f webserver-deployment-5d9fdcc779- deployment-3971 6b66c922-0629-4271-982f-2fc01a91de41 17441 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a18e40 0xc003a18e41}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7ltgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ltgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.425: INFO: Pod "webserver-deployment-5d9fdcc779-72bsp" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-72bsp webserver-deployment-5d9fdcc779- deployment-3971 90ffa108-6b31-4ad8-a2a6-9eb1a7bba5e1 17405 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a19420 0xc003a19421}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vxjz7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxjz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.447: INFO: Pod "webserver-deployment-5d9fdcc779-9xlcr" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9xlcr webserver-deployment-5d9fdcc779- deployment-3971 9e8e9b62-8344-4017-9323-04acc441423f 17382 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a19ac0 0xc003a19ac1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 
2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j2nn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resource
List{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2nn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemer
alContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.448: INFO: Pod "webserver-deployment-5d9fdcc779-cgzdq" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-cgzdq webserver-deployment-5d9fdcc779- deployment-3971 552053ca-66a8-409d-b3e5-2836132c17f3 17208 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a19c70 0xc003a19c71}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lk8nj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lk8nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-693qzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.140,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://6af2b1c222905e0dc45146b373c67b219ff2b32b9606db32312db7f21935dbfc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.449: INFO: Pod "webserver-deployment-5d9fdcc779-cp8lc" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-cp8lc webserver-deployment-5d9fdcc779- deployment-3971 43957c37-eb60-41e1-b169-61965b7fede2 17465 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a19e40 0xc003a19e41}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-22sg8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22sg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.450: INFO: Pod "webserver-deployment-5d9fdcc779-fmpc9" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-fmpc9 webserver-deployment-5d9fdcc779- deployment-3971 e888544e-a218-465b-9fc1-e0f130bf26e4 17438 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc003a19ff0 0xc003a19ff1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 
2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dmmdz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resource
List{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmmdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephemer
alContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.450: INFO: Pod "webserver-deployment-5d9fdcc779-g6726" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-g6726 webserver-deployment-5d9fdcc779- deployment-3971 9f90e1da-2670-4cb1-bb06-30674bbb14b1 17430 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325c1a0 0xc00325c1a1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6mrnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mrnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-29 15:13:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.455: INFO: Pod "webserver-deployment-5d9fdcc779-hcz4p" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-hcz4p webserver-deployment-5d9fdcc779- deployment-3971 8d436132-cf60-4217-8d97-02dd5ddbd5f4 17453 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325c440 0xc00325c441}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fj6c7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj6c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.456: INFO: Pod "webserver-deployment-5d9fdcc779-k9w6j" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-k9w6j webserver-deployment-5d9fdcc779- deployment-3971 1e82fc05-d97c-4c6f-9070-28e4e06069b7 17150 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325c5f0 0xc00325c5f1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zzg4j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzg4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-29 15:13:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.100,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9972bc2937bcfc9116851960d4e621bc08d77c8473b4ac9f780155cc4ba5a303,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.457: INFO: Pod "webserver-deployment-5d9fdcc779-l48wc" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-l48wc webserver-deployment-5d9fdcc779- deployment-3971 d9c03aad-e6c2-4d3c-8399-a3cb4dbd7bd7 17443 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325c7c0 0xc00325c7c1}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nsckz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nsckz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-29 15:13:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.457: INFO: Pod "webserver-deployment-5d9fdcc779-llz74" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-llz74 webserver-deployment-5d9fdcc779- deployment-3971 f5f521e8-128f-4a9a-bbbe-5b346cdee3bb 17218 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325c970 0xc00325c971}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 
15:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lr2kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceR
equirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lr2kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContai
ners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.125,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://2cb577cd34b9953e5214ad743f043dba816e1ff4cdf5187d73348e21a9eae1fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.458: INFO: Pod "webserver-deployment-5d9fdcc779-m882p" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-m882p webserver-deployment-5d9fdcc779- deployment-3971 5cdeb906-f088-4c22-97a5-745b1884a2a2 17216 0 2023-01-29 15:13:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325cb60 
0xc00325cb61}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tqdnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqdnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-pw1vby-worker-biy623,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:05 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-29 15:13:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.124,StartTime:2023-01-29 15:13:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-29 15:13:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://153561b515d9b8079ec2e86c7dac643f246946367fd4cd6debc10688f3ca241e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 29 15:13:12.458: INFO: Pod "webserver-deployment-5d9fdcc779-nf7v8" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-nf7v8 webserver-deployment-5d9fdcc779- deployment-3971 bc0e2649-5184-49d0-835b-ca1a7c261f98 17425 0 2023-01-29 15:13:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c208afb0-2d04-4590-9239-ca118965e88c 0xc00325cd30 0xc00325cd31}] [] [{kube-controller-manager Update v1 2023-01-29 15:13:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c208afb0-2d04-4590-9239-ca118965e88c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-29 15:13:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
[... remainder of serialized Pod spec/status omitted; pod was Pending (httpd: ContainerCreating) on node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-qq527, HostIP 172.18.0.6 ...]
Jan 29 15:13:12.459: INFO: Pod "webserver-deployment-5d9fdcc779-nzbsg" is not available: Pending, container httpd waiting (ContainerCreating) on node k8s-upgrade-and-conformance-pw1vby-worker-693qzd, HostIP 172.18.0.7, started 2023-01-29 15:13:09 +0000 UTC [full Pod spec/status dump omitted]
Jan 29 15:13:12.464: INFO: Pod "webserver-deployment-5d9fdcc779-t4tbd" is not available: Pending, container httpd waiting (ContainerCreating) on node k8s-upgrade-and-conformance-pw1vby-worker-693qzd, HostIP 172.18.0.7, started 2023-01-29 15:13:10 +0000 UTC [full Pod spec/status dump omitted]
Jan 29 15:13:12.465: INFO: Pod "webserver-deployment-5d9fdcc779-xhtfh" is available: Running and Ready on node k8s-upgrade-and-conformance-pw1vby-md-0-f7x96-5c58bbc46-9hdfx, HostIP 172.18.0.4, PodIP 192.168.0.148, container httpd running since 2023-01-29 15:13:03 +0000 UTC [full Pod spec/status dump omitted]
Jan 29 15:13:12.466: INFO: Pod "webserver-deployment-5d9fdcc779-zd8wk" is not available: created 2023-01-29 15:13:09 +0000 UTC [serialized Pod spec/status dump truncated ...]