Recent runs | View in Spyglass

Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
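The `--ginkgo.focus` argument is a regular expression matched against the full spec name, which is why every space and bracket in the command above is escaped. A minimal sketch showing that the focus string selects exactly this spec; the plain-text spec name is reconstructed from the failure below, so treat it as an assumption:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus value as passed to --ginkgo.focus: \s for spaces,
	// escaped brackets and hyphens so they match literally.
	focus := regexp.MustCompile(`capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`)

	// Spec name reconstructed from this report (assumption).
	spec := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"

	fmt.Println(focus.MatchString(spec)) // true
}
```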
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000564a20>: {
        error: <*errors.withMessage | 0xc000622940>{
            cause: <*errors.errorString | 0xc0007e8970>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
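Exit code 137 is 128 + 9, i.e. the conformance container was killed with SIGKILL, which typically points at an OOM kill or a hard timeout rather than a test assertion failure. A minimal sketch of the decoding convention:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Exit codes above 128 conventionally mean "terminated by signal (code - 128)".
	exitCode := 137
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("terminated by signal %d (%v)\n", int(sig), sig) // signal 9 (killed)
	}
}
```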
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-rev1cp
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-rev1cp"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-ihjwwi" using the "upgrades-cgroupfs" template (Kubernetes v1.23.15, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-ihjwwi --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-ihjwwi-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-ihjwwi-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-rev1cp/k8s-upgrade-and-conformance-ihjwwi-wgnbq to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-rev1cp/k8s-upgrade-and-conformance-ihjwwi-wgnbq to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.24.9
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-rev1cp/k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b to be upgraded to v1.24.9
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.9
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-rev1cp/k8s-upgrade-and-conformance-ihjwwi-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-rev1cp/k8s-upgrade-and-conformance-ihjwwi-mp-0 to be upgraded from v1.23.15 to v1.24.9
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.24.9
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673699831 - Will randomize all specs
Will run 6973 specs
Running in parallel across 4 nodes
Jan 14 12:37:13.849: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:37:13.852: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 14 12:37:13.868: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 14 12:37:13.903: INFO: The status of Pod coredns-bd6b6df9f-8zdqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: The status of Pod coredns-bd6b6df9f-ld7zt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: The status of Pod kindnet-m2l9j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: The status of Pod kindnet-s87hw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: The status of Pod kube-proxy-8rrgq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: The status of Pod kube-proxy-q9gzg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 14 12:37:13.903: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 14 12:37:13.903: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 14 12:37:13.903: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 14 12:37:13.903: INFO: coredns-bd6b6df9f-8zdqx k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC }]
Jan 14 12:37:13.903: INFO: coredns-bd6b6df9f-ld7zt k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC }]
Jan 14 12:37:13.903: INFO: kindnet-m2l9j k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:04 +0000 UTC }]
Jan 14 12:37:13.903: INFO: kindnet-s87hw k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC }]
Jan 14 12:37:13.903: INFO: kube-proxy-8rrgq k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC }]
Jan 14 12:37:13.903: INFO: kube-proxy-q9gzg k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC }]
Jan 14 12:37:13.903: INFO:
... (the same six pods are re-polled every 2 seconds with unchanged status and an identical pod table each time) ...
Jan 14 12:37:43.927: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (30 seconds elapsed)
Jan 14 12:37:43.928: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC }] Jan 14 12:37:43.928: INFO: kube-proxy-q9gzg k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC }] Jan 14 12:37:43.928: INFO: Jan 14 12:37:45.926: INFO: The status of Pod coredns-bd6b6df9f-8zdqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: The status of Pod coredns-bd6b6df9f-ld7zt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: The status of Pod kindnet-m2l9j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: The status of Pod kindnet-s87hw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: The status of Pod kube-proxy-8rrgq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: The status of Pod kube-proxy-q9gzg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:45.926: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (32 seconds elapsed) Jan 14 12:37:45.926: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Jan 14 12:37:45.926: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 12:37:45.926: INFO: coredns-bd6b6df9f-8zdqx k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC }] Jan 14 12:37:45.926: INFO: coredns-bd6b6df9f-ld7zt k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC }] Jan 14 12:37:45.926: INFO: kindnet-m2l9j k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:04 +0000 UTC }] Jan 14 12:37:45.926: INFO: kindnet-s87hw k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC }] Jan 14 12:37:45.926: INFO: kube-proxy-8rrgq k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC }] Jan 14 12:37:45.926: INFO: kube-proxy-q9gzg k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC }] Jan 14 12:37:45.926: INFO: Jan 14 12:37:47.928: INFO: The status of Pod coredns-bd6b6df9f-8zdqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:47.928: INFO: The status of Pod coredns-bd6b6df9f-ld7zt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:47.928: INFO: The status of Pod kindnet-m2l9j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:47.928: INFO: The status of Pod kindnet-s87hw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:47.928: INFO: The status of Pod kube-proxy-8rrgq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:47.928: INFO: The status of Pod kube-proxy-q9gzg is Running (Ready = false), waiting for it to be either Running (with 
Ready = true) or Failed Jan 14 12:37:47.928: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (34 seconds elapsed) Jan 14 12:37:47.928: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Jan 14 12:37:47.928: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 12:37:47.928: INFO: coredns-bd6b6df9f-8zdqx k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC }] Jan 14 12:37:47.928: INFO: coredns-bd6b6df9f-ld7zt k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC }] Jan 14 12:37:47.928: INFO: kindnet-m2l9j k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:04 +0000 UTC }] Jan 14 12:37:47.928: INFO: kindnet-s87hw k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC }] Jan 14 12:37:47.928: INFO: kube-proxy-8rrgq k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC }] Jan 14 12:37:47.928: INFO: kube-proxy-q9gzg k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC }] Jan 14 12:37:47.928: INFO: Jan 14 12:37:49.934: INFO: The status of Pod coredns-bd6b6df9f-8zdqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: The status of Pod coredns-bd6b6df9f-ld7zt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: The status of Pod kindnet-m2l9j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: The status of Pod kindnet-s87hw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: The status of Pod 
kube-proxy-8rrgq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: The status of Pod kube-proxy-q9gzg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:49.934: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (36 seconds elapsed) Jan 14 12:37:49.934: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Jan 14 12:37:49.934: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 12:37:49.934: INFO: coredns-bd6b6df9f-8zdqx k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:10 +0000 UTC }] Jan 14 12:37:49.934: INFO: coredns-bd6b6df9f-ld7zt k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:35:53 +0000 UTC }] Jan 14 12:37:49.934: INFO: kindnet-m2l9j k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:04 +0000 UTC }] Jan 14 12:37:49.934: INFO: kindnet-s87hw k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:30:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:29:53 +0000 UTC }] Jan 14 12:37:49.934: INFO: kube-proxy-8rrgq k8s-upgrade-and-conformance-ihjwwi-worker-cydfij Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:33:37 +0000 UTC }] Jan 14 12:37:49.934: INFO: kube-proxy-q9gzg k8s-upgrade-and-conformance-ihjwwi-worker-bxfri7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:36:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:34:27 +0000 UTC }] Jan 14 12:37:49.934: INFO: Jan 14 12:37:51.922: INFO: The status of Pod coredns-bd6b6df9f-2tf8v is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:51.922: INFO: The status of Pod coredns-bd6b6df9f-t5mmm is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:51.922: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (38 seconds elapsed) Jan 
14 12:37:51.922: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Jan 14 12:37:51.922: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 12:37:51.922: INFO: coredns-bd6b6df9f-2tf8v k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC }] Jan 14 12:37:51.922: INFO: coredns-bd6b6df9f-t5mmm k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC }] Jan 14 12:37:51.922: INFO: Jan 14 12:37:53.923: INFO: The status of Pod coredns-bd6b6df9f-t5mmm is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 14 12:37:53.923: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (40 seconds elapsed) Jan 14 12:37:53.923: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 14 12:37:53.923: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 12:37:53.923: INFO: coredns-bd6b6df9f-t5mmm k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:37:51 +0000 UTC }] Jan 14 12:37:53.923: INFO: Jan 14 12:37:55.922: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (42 seconds elapsed) Jan 14 12:37:55.922: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
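The loop above is the conformance suite's readiness gate: it re-lists the kube-system pods every two seconds until every pod is Running with Ready=true (or has terminally Succeeded/Failed), and only then lets the specs start. A minimal client-go sketch of the same kind of poll, assuming the /tmp/kubeconfig path seen later in this log (the helper below is illustrative, not the framework's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll at the 2s cadence seen in the log until all pods settle.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for i := range pods.Items {
			if p := &pods.Items[i]; podReady(p) || p.Status.Phase == corev1.PodSucceeded {
				ready++
			}
		}
		fmt.Printf("%d / %d pods in 'kube-system' are running and ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}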
Jan 14 12:37:55.922: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 14 12:37:55.927: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jan 14 12:37:55.927: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 14 12:37:55.927: INFO: e2e test version: v1.24.9 Jan 14 12:37:55.928: INFO: kube-apiserver version: v1.24.9 Jan 14 12:37:55.929: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:37:55.935: INFO: Cluster IP family: ipv4 SSSS ------------------------------ Jan 14 12:37:55.930: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:37:55.944: INFO: Cluster IP family: ipv4 SSSSSSS ------------------------------ Jan 14 12:37:55.974: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:37:55.988: INFO: Cluster IP family: ipv4 SS ------------------------------ Jan 14 12:37:55.974: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:37:55.995: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:37:55.953: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl W0114 12:37:55.976458 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 14 12:37:55.976: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:245 [It] should check if v1 is in available api versions [Conformance] test/e2e/framework/framework.go:652 STEP: validating api versions Jan 14 12:37:55.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1538 api-versions' Jan 14 12:37:56.075: INFO: stderr: "" Jan 14 12:37:56.075: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:188 Jan 14 12:37:56.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1538" for this suite.
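The api-versions spec above shells out to kubectl and asserts that the core "v1" group/version appears in the output. That list is served by the API server's discovery endpoint; a hedged sketch of the equivalent check with client-go's discovery client (kubeconfig path taken from this run):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroups returns the group/version list that
	// `kubectl api-versions` prints one entry per line.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1", "v1"
			if v.GroupVersion == "v1" {
				found = true
			}
		}
	}
	fmt.Println("core v1 present:", found)
}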
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:37:56.024: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected W0114 12:37:56.042922 14 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 14 12:37:56.042: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43 [It] should provide container's cpu limit [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 STEP: Creating a pod to test downward API volume plugin Jan 14 12:37:56.060: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb" in namespace "projected-3925" to be "Succeeded or Failed" Jan 14 12:37:56.065: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559603ms Jan 14 12:37:58.071: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010662115s Jan 14 12:38:00.077: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016733051s Jan 14 12:38:02.082: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb": Phase="Running", Reason="", readiness=false. Elapsed: 6.021859931s Jan 14 12:38:04.087: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.026820803s STEP: Saw pod success Jan 14 12:38:04.087: INFO: Pod "downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb" satisfied condition "Succeeded or Failed" Jan 14 12:38:04.093: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb container client-container: <nil> STEP: delete the pod Jan 14 12:38:04.121: INFO: Waiting for pod downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb to disappear Jan 14 12:38:04.124: INFO: Pod downwardapi-volume-c8427e4a-8244-478e-9353-2191e59617cb no longer exists [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188 Jan 14 12:38:04.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3925" for this suite.
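The spec above mounts the container's own limits.cpu through the downward API and asserts on the file contents. A sketch of that volume wiring with the corev1 types; names are illustrative, and the plain downwardAPI volume is shown (the projected variant wraps the same items in a projected source):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// limits.cpu is resolved by the kubelet at mount time.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}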
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:37:56.108: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 STEP: Creating secret with name secret-test-7db88a16-6e9f-4701-a41d-b891955aac1c STEP: Creating a pod to test consume secrets Jan 14 12:37:56.139: INFO: Waiting up to 5m0s for pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40" in namespace "secrets-7006" to be "Succeeded or Failed" Jan 14 12:37:56.145: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40": Phase="Pending", Reason="", readiness=false. Elapsed: 5.829813ms Jan 14 12:37:58.150: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010860624s Jan 14 12:38:00.484: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40": Phase="Running", Reason="", readiness=true. Elapsed: 4.345407131s Jan 14 12:38:02.488: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40": Phase="Running", Reason="", readiness=false. Elapsed: 6.349475607s Jan 14 12:38:04.498: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.358821526s STEP: Saw pod success Jan 14 12:38:04.498: INFO: Pod "pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40" satisfied condition "Succeeded or Failed" Jan 14 12:38:04.502: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40 container secret-volume-test: <nil> STEP: delete the pod Jan 14 12:38:04.532: INFO: Waiting for pod pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40 to disappear Jan 14 12:38:04.535: INFO: Pod pod-secrets-cc593a77-2fd1-4e56-97f6-c57c59cdaf40 no longer exists [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:188 Jan 14 12:38:04.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7006" for this suite.
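Here the secret volume is consumed by a non-root user with a non-default file mode: DefaultMode sets the permission bits on the projected files, and FSGroup/RunAsUser make them readable to that uid, which is why the spec is [LinuxOnly]. A minimal sketch under those assumptions (secret name, ids, and namespace are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	uid, gid, mode := int64(1000), int64(2000), int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// RunAsUser + FSGroup make the mounted files readable
			// to a non-root uid/gid.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"ls", "-l", "/etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo", // assumed to exist
						DefaultMode: &mode,              // 0440 instead of the 0644 default
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}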
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:04.220: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191 [It] should contain environment variables for services [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jan 14 12:38:04.253: INFO: The status of Pod server-envvars-961db88d-cbc0-4194-b651-3fadf027952d is Pending, waiting for it to be Running (with Ready = true) Jan 14 12:38:06.259: INFO: The status of Pod server-envvars-961db88d-cbc0-4194-b651-3fadf027952d is Running (Ready = true) Jan 14 12:38:06.284: INFO: Waiting up to 5m0s for pod "client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d" in namespace "pods-537" to be "Succeeded or Failed" Jan 14 12:38:06.291: INFO: Pod "client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.845495ms Jan 14 12:38:08.296: INFO: Pod "client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011909937s Jan 14 12:38:10.301: INFO: Pod "client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016441126s STEP: Saw pod success Jan 14 12:38:10.301: INFO: Pod "client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d" satisfied condition "Succeeded or Failed" Jan 14 12:38:10.304: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d container env3cont: <nil> STEP: delete the pod Jan 14 12:38:10.318: INFO: Waiting for pod client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d to disappear Jan 14 12:38:10.322: INFO: Pod client-envvars-cd6ab889-33fb-4dff-89a1-eb269e84ef2d no longer exists [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jan 14 12:38:10.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-537" for this suite.
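This spec relies on the kubelet injecting <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT environment variables for every service that exists in the pod's namespace when the pod starts; services created afterwards are only discoverable via DNS. A tiny sketch of what the client container effectively does:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Print only the service-discovery variables the kubelet injected
// at container start, e.g. KUBERNETES_SERVICE_HOST=10.128.0.1.
func main() {
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}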
• ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":89,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:10.337: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet test/e2e/common/node/kubelet.go:40 [BeforeEach] when scheduling a busybox command that always fails in a pod test/e2e/common/node/kubelet.go:84 [It] should be possible to delete [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 [AfterEach] [sig-node] Kubelet test/e2e/framework/framework.go:188 Jan 14 12:38:10.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6838" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":91,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:37:55.947: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota W0114 12:37:55.976193 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 14 12:37:55.976: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] test/e2e/framework/framework.go:652 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:188 Jan 14 12:38:12.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3399" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret.
[Conformance]","total":-1,"completed":1,"skipped":5,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:38:10.397: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43 [It] should update labels on modification [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating the pod Jan 14 12:38:10.429: INFO: The status of Pod labelsupdate7ac3f22e-6d02-4eb7-b29a-ca5387293906 is Pending, waiting for it to be Running (with Ready = true) Jan 14 12:38:12.433: INFO: The status of Pod labelsupdate7ac3f22e-6d02-4eb7-b29a-ca5387293906 is Running (Ready = true) Jan 14 12:38:12.952: INFO: Successfully updated pod "labelsupdate7ac3f22e-6d02-4eb7-b29a-ca5387293906" [AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188 Jan 14 12:38:16.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4413" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":95,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:38:16.992: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet test/e2e/common/node/kubelet.go:40 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 Jan 14 12:38:17.029: INFO: The status of Pod busybox-host-aliases1926a5e6-2c16-4847-a5a7-8260961b2cc6 is Pending, waiting for it to be Running (with Ready = true) Jan 14 12:38:19.035: INFO: The status of Pod busybox-host-aliases1926a5e6-2c16-4847-a5a7-8260961b2cc6 is Running (Ready = true) [AfterEach] [sig-node] Kubelet test/e2e/framework/framework.go:188 Jan 14 12:38:19.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-3539" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:19.127: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] test/e2e/framework/framework.go:652 STEP: set up a multi version CRD Jan 14 12:38:19.146: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188 Jan 14 12:38:33.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5840" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":6,"skipped":144,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:12.068: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes test/e2e/storage/subpath.go:40 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] test/e2e/framework/framework.go:652 STEP: Creating pod pod-subpath-test-configmap-chd4 STEP: Creating a pod to test atomic-volume-subpath Jan 14 12:38:12.114: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-chd4" in namespace "subpath-9399" to be "Succeeded or Failed" Jan 14 12:38:12.117: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.102152ms Jan 14 12:38:14.121: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006711221s Jan 14 12:38:16.125: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true.
Elapsed: 4.010444118s Jan 14 12:38:18.129: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 6.015133336s Jan 14 12:38:20.134: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 8.019557732s Jan 14 12:38:22.140: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 10.026070741s Jan 14 12:38:24.145: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 12.030560147s Jan 14 12:38:26.149: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 14.034453262s Jan 14 12:38:28.153: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 16.039126017s Jan 14 12:38:30.158: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 18.043764429s Jan 14 12:38:32.163: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048390228s Jan 14 12:38:34.167: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Running", Reason="", readiness=false. Elapsed: 22.053102155s Jan 14 12:38:36.171: INFO: Pod "pod-subpath-test-configmap-chd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05663175s STEP: Saw pod success Jan 14 12:38:36.171: INFO: Pod "pod-subpath-test-configmap-chd4" satisfied condition "Succeeded or Failed" Jan 14 12:38:36.174: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-subpath-test-configmap-chd4 container test-container-subpath-configmap-chd4: <nil> STEP: delete the pod Jan 14 12:38:36.187: INFO: Waiting for pod pod-subpath-test-configmap-chd4 to disappear Jan 14 12:38:36.191: INFO: Pod pod-subpath-test-configmap-chd4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-chd4 Jan 14 12:38:36.191: INFO: Deleting pod "pod-subpath-test-configmap-chd4" in namespace "subpath-9399" [AfterEach] [sig-storage] Subpath test/e2e/framework/framework.go:188 Jan 14 12:38:36.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9399" for this suite.
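The subpath spec mounts a single key of a configmap volume at a file path via SubPath instead of shadowing a whole directory; configmap, secret, downwardAPI and projected volumes all go through the kubelet's atomic writer, which is the behaviour the "Atomic writer volumes" group exercises. A sketch of the mount shape (configmap name and key are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test/sub/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm",
					MountPath: "/test/sub/data",
					// SubPath mounts just the "data" key at the given
					// path rather than the whole volume directory.
					SubPath: "data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // assumed to exist
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}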
• ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:33.932: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] test/e2e/framework/framework.go:652 STEP: Creating configMap with name configmap-projected-all-test-volume-de29790a-cf9a-4b5a-9b82-3d4c65d41fbf STEP: Creating secret with name secret-projected-all-test-volume-63b14b60-d04f-42c6-b394-4297b8d0d1ab STEP: Creating a pod to test Check all projections for projected volume plugin Jan 14 12:38:33.969: INFO: Waiting up to 5m0s for pod "projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7" in namespace "projected-1937" to be "Succeeded or Failed" Jan 14 12:38:33.972: INFO: Pod "projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582756ms Jan 14 12:38:35.977: INFO: Pod "projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007107682s Jan 14 12:38:37.981: INFO: Pod "projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011678659s STEP: Saw pod success Jan 14 12:38:37.981: INFO: Pod "projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7" satisfied condition "Succeeded or Failed" Jan 14 12:38:37.984: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7 container projected-all-volume-test: <nil> STEP: delete the pod Jan 14 12:38:37.996: INFO: Waiting for pod projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7 to disappear Jan 14 12:38:37.999: INFO: Pod projected-volume-4f22896e-1b5c-4ad5-8d83-f9ee4e965ff7 no longer exists [AfterEach] [sig-storage] Projected combined test/e2e/framework/framework.go:188 Jan 14 12:38:37.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1937" for this suite.
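A projected volume merges several sources under one mount point, and this spec checks all of them at once: a secret, a configmap, and a downward API item. A sketch of the volume definition (object names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// One projected volume combining a secret, a configmap and a
	// downward API file under a single mount point.
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "mysecret"}, // assumed to exist
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "myconfigmap"}, // assumed to exist
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Println(string(out))
}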
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":152,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:38:38.016: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment test/e2e/apps/deployment.go:91 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] test/e2e/framework/framework.go:652 Jan 14 12:38:38.035: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 14 12:38:38.042: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 14 12:38:43.046: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 14 12:38:43.046: INFO: Creating deployment "test-rolling-update-deployment" Jan 14 12:38:43.051: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 14 12:38:43.061: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 14 12:38:45.070: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 14 12:38:45.073: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment test/e2e/apps/deployment.go:84 Jan 14 12:38:45.083: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-980 bddeae99-a3ec-4da5-a2e8-8174882cca07 2692 1 2023-01-14 12:38:43 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-14 12:38:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:38:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}
status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055a0a48 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 12:38:43 +0000 UTC,LastTransitionTime:2023-01-14 12:38:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67c8f74c6c" has successfully progressed.,LastUpdateTime:2023-01-14 12:38:44 +0000 UTC,LastTransitionTime:2023-01-14 12:38:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 14 12:38:45.087: INFO: New ReplicaSet "test-rolling-update-deployment-67c8f74c6c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67c8f74c6c deployment-980 2221eaeb-b546-48b9-bea3-5dcdad098c82 2683 1 2023-01-14 12:38:43 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:67c8f74c6c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment bddeae99-a3ec-4da5-a2e8-8174882cca07 0xc0055a0ed7 0xc0055a0ed8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:38:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bddeae99-a3ec-4da5-a2e8-8174882cca07\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:38:44 +0000 UTC FieldsV1
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67c8f74c6c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:67c8f74c6c] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055a0f88 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:38:45.087: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 14 12:38:45.087: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-980 93b4a394-5530-4bd8-9d35-dd931c79e2e4 2691 2 2023-01-14 12:38:38 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment bddeae99-a3ec-4da5-a2e8-8174882cca07 0xc0055a0daf 0xc0055a0dc0}] [] [{e2e.test Update apps/v1 2023-01-14 12:38:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:38:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bddeae99-a3ec-4da5-a2e8-8174882cca07\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:38:44 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055a0e78 <nil> ClusterFirst map[] <nil> false false false <nil> 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:38:45.091: INFO: Pod "test-rolling-update-deployment-67c8f74c6c-qqbnk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67c8f74c6c-qqbnk test-rolling-update-deployment-67c8f74c6c- deployment-980 6b3c84ac-9c71-45a5-9ebe-dedc450f8479 2682 0 2023-01-14 12:38:43 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:67c8f74c6c] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67c8f74c6c 2221eaeb-b546-48b9-bea3-5dcdad098c82 0xc005702797 0xc005702798}] [] [{kube-controller-manager Update v1 2023-01-14 12:38:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2221eaeb-b546-48b9-bea3-5dcdad098c82\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:38:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l5r24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5r24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:38:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:38:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:38:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:38:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.4,StartTime:2023-01-14 12:38:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:38:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://2ff644770d2896a43fdddee1be5571d0bba4ebab28c848e4c490614e9641f9b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment test/e2e/framework/framework.go:188
Jan 14 12:38:45.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-980" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":157,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:45.142: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services test/e2e/network/service.go:758
[It] should be able to change the type from ExternalName to NodePort [Conformance] test/e2e/framework/framework.go:652
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2465
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2465
I0114 12:38:45.191285 14 runners.go:193] Created replication controller with name: externalname-service, namespace: services-2465, replica count: 2
I0114 12:38:48.243702 14 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 14 12:38:48.243: INFO: Creating new exec pod
Jan 14 12:38:51.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 14 12:38:51.416: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 14 12:38:51.416: INFO: stdout: "externalname-service-4svw7"
Jan 14 12:38:51.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.117.199 80'
Jan 14 12:38:51.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.117.199 80\nConnection to 10.129.117.199 80 port [tcp/http] succeeded!\n"
Jan 14 12:38:51.570: INFO: stdout: ""
Jan 14 12:38:52.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.117.199 80'
Jan 14 12:38:52.736: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.117.199 80\nConnection to 10.129.117.199 80 port [tcp/http] succeeded!\n"
Jan 14 12:38:52.736: INFO: stdout: ""
Jan 14 12:38:53.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.117.199 80'
Jan 14 12:38:53.714: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.117.199 80\nConnection to 10.129.117.199 80 port [tcp/http] succeeded!\n"
Jan 14 12:38:53.714: INFO: stdout: "externalname-service-q4j5d"
Jan 14 12:38:53.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 30885'
Jan 14 12:38:53.880: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 30885\nConnection to 172.18.0.6 30885 port [tcp/*] succeeded!\n"
Jan 14 12:38:53.880: INFO: stdout: "externalname-service-4svw7"
Jan 14 12:38:53.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2465 exec execpodmts2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30885'
Jan 14 12:38:54.031: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30885\nConnection to 172.18.0.4 30885 port [tcp/*] succeeded!\n"
Jan 14 12:38:54.031: INFO: stdout: "externalname-service-q4j5d"
Jan 14 12:38:54.032: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services test/e2e/framework/framework.go:188
Jan 14 12:38:54.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2465" for this suite.
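The spec above flips a Service from ExternalName to NodePort and then probes the service name, its cluster IP, and each node address with netcat from an exec pod until every backend has answered; an empty stdout (as at 12:38:51.570 and 12:38:52.736) means the connection opened but the hostname was not read back in time, so the spec retries. A minimal sketch of the same probe by hand, reusing the names and addresses this particular run allocated (the namespace and exec pod are per-run and deleted afterwards):

  # Flip the Service type, then read back the allocated NodePort (30885 in this run)
  kubectl --kubeconfig=/tmp/kubeconfig -n services-2465 patch service externalname-service -p '{"spec":{"type":"NodePort"}}'
  kubectl --kubeconfig=/tmp/kubeconfig -n services-2465 get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}'
  # Probe the service DNS name on port 80, then a node address on the NodePort
  kubectl --kubeconfig=/tmp/kubeconfig -n services-2465 exec execpodmts2f -- /bin/sh -c 'echo hostName | nc -v -t -w 2 externalname-service 80'
  kubectl --kubeconfig=/tmp/kubeconfig -n services-2465 exec execpodmts2f -- /bin/sh -c 'echo hostName | nc -v -t -w 2 172.18.0.6 30885'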
[AfterEach] [sig-network] Services test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":9,"skipped":189,"failed":0}
------------------------------
[BeforeEach] [sig-node] PodTemplates test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:54.110: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance] test/e2e/framework/framework.go:652
STEP: Create set of pod templates
Jan 14 12:38:54.172: INFO: created test-podtemplate-1
Jan 14 12:38:54.184: INFO: created test-podtemplate-2
Jan 14 12:38:54.193: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 14 12:38:54.211: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 14 12:38:54.243: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates test/e2e/framework/framework.go:188
Jan 14 12:38:54.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4142" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:37:55.993: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
W0114 12:37:56.012987 15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 14 12:37:56.013: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:37:56.566: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 14 12:37:58.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 14 12:38:00.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 37, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:38:03.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance] test/e2e/framework/framework.go:652
Jan 14 12:38:03.603: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Jan 14 12:38:14.131: INFO: Waiting for webhook configuration to be ready...
Jan 14 12:38:24.240: INFO: Waiting for webhook configuration to be ready...
Jan 14 12:38:34.345: INFO: Waiting for webhook configuration to be ready...
Jan 14 12:38:44.540: INFO: Waiting for webhook configuration to be ready...
Jan 14 12:38:54.554: INFO: Waiting for webhook configuration to be ready...
Jan 14 12:38:54.554: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002382c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc000ca8d80, {0xc00266e780, 0xc}, 0xc0023ab540, 0xc002453b00, 0x0?)
	test/e2e/apimachinery/webhook.go:1731 +0x805
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
	test/e2e/apimachinery/webhook.go:226 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
	test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f040, 0x73bdd00)
	/usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1486 +0x35f
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188
Jan 14 12:38:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7267" for this suite.
STEP: Destroying namespace "webhook-7267-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:104
• Failure [59.131 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  test/e2e/framework/framework.go:652

  Jan 14 12:38:54.554: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002382c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/apimachinery/webhook.go:1731
------------------------------
[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:54.276: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43
[It] should update annotations on modification [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating the pod
Jan 14 12:38:54.317: INFO: The status of Pod annotationupdate110ee50b-6eed-434f-b078-e5ca3ebc0688 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:38:56.321: INFO: The status of Pod annotationupdate110ee50b-6eed-434f-b078-e5ca3ebc0688 is Running (Ready = true)
Jan 14 12:38:56.849: INFO: Successfully updated pod "annotationupdate110ee50b-6eed-434f-b078-e5ca3ebc0688"
[AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188
Jan 14 12:39:00.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2793" for this suite.
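The annotationupdate pod above works because a downwardAPI volume that projects metadata.annotations is rewritten by the kubelet when the annotations change, without restarting the container. A minimal pod of that shape (hypothetical names and image, not the framework's generated manifest):

  kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo          # hypothetical name
    annotations:
      build: "one"
  spec:
    containers:
    - name: client-container
      image: busybox               # stand-in image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  # After an annotation change, /etc/podinfo/annotations is updated in place:
  kubectl --kubeconfig=/tmp/kubeconfig annotate pod annotation-demo build=two --overwrite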
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":218,"failed":0}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":0,"skipped":2,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:55.126: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:38:55.500: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:38:58.526: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance] test/e2e/framework/framework.go:652
Jan 14 12:38:58.530: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188
Jan 14 12:39:01.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4722" for this suite.
STEP: Destroying namespace "webhook-4722-markers" for this suite.
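The first attempt at this spec timed out in registerWebhookForCustomResource while waiting for the webhook configuration to become ready, and the identical spec passed on this retry, which points at slow webhook readiness or propagation rather than a permanently broken configuration. If it recurs, the registration and its backing pod can be inspected directly while the spec runs; a sketch using the framework's object names from this log (the per-run webhook namespaces are destroyed afterwards):

  kubectl --kubeconfig=/tmp/kubeconfig get validatingwebhookconfigurations,mutatingwebhookconfigurations
  kubectl --kubeconfig=/tmp/kubeconfig -n webhook-7267 get deploy,pods,endpoints
  kubectl --kubeconfig=/tmp/kubeconfig -n webhook-7267 logs deploy/sample-webhook-deployment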
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:104
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":1,"skipped":2,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:00.926: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory request [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:39:00.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8" in namespace "projected-7810" to be "Succeeded or Failed"
Jan 14 12:39:00.959: INFO: Pod "downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100045ms
Jan 14 12:39:02.963: INFO: Pod "downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00861382s
Jan 14 12:39:04.968: INFO: Pod "downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013720796s
STEP: Saw pod success
Jan 14 12:39:04.968: INFO: Pod "downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8" satisfied condition "Succeeded or Failed"
Jan 14 12:39:04.971: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8 container client-container: <nil>
STEP: delete the pod
Jan 14 12:39:04.991: INFO: Waiting for pod downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8 to disappear
Jan 14 12:39:04.994: INFO: Pod downwardapi-volume-b474c827-7829-432f-bcd4-30dd29c090c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188
Jan 14 12:39:04.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7810" for this suite.
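The downwardapi-volume pod above reads its own memory request back out of a projected downwardAPI volume via a resourceFieldRef. A minimal sketch of that shape (hypothetical names, image, and request value; the framework generates its own):

  kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox               # stand-in image
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: "32Mi"           # assumed value
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
  EOF

The container prints the request in bytes, which is what the spec asserts against.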
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":253,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:05.091: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-map-524b234f-ea29-42e7-9b4e-7f8cf07e1a1e
STEP: Creating a pod to test consume configMaps
Jan 14 12:39:05.125: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff" in namespace "projected-6754" to be "Succeeded or Failed"
Jan 14 12:39:05.137: INFO: Pod "pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.692649ms
Jan 14 12:39:07.141: INFO: Pod "pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015721033s
Jan 14 12:39:09.145: INFO: Pod "pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020664151s
STEP: Saw pod success
Jan 14 12:39:09.146: INFO: Pod "pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff" satisfied condition "Succeeded or Failed"
Jan 14 12:39:09.149: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:39:09.162: INFO: Waiting for pod pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff to disappear
Jan 14 12:39:09.167: INFO: Pod pod-projected-configmaps-29f7d0a9-1ad6-4c9f-a7bd-7bee1bd0fbff no longer exists
[AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188
Jan 14 12:39:09.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6754" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":307,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:09.314: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet test/e2e/common/node/kubelet.go:40
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
Jan 14 12:39:09.343: INFO: The status of Pod busybox-readonly-fsff7537d1-72f2-44f4-81f2-06c61888af40 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:39:11.348: INFO: The status of Pod busybox-readonly-fsff7537d1-72f2-44f4-81f2-06c61888af40 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet test/e2e/framework/framework.go:188
Jan 14 12:39:11.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4723" for this suite.
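The busybox-readonly-fs pod above runs with a read-only root filesystem, and the spec's assertion is that writes to the root fail while the container keeps running. A minimal sketch of the same setup (hypothetical pod name):

  kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-demo    # hypothetical name
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      securityContext:
        readOnlyRootFilesystem: true
  EOF
  # A write to the root filesystem should fail with "Read-only file system"
  kubectl --kubeconfig=/tmp/kubeconfig exec busybox-readonly-demo -- sh -c 'echo x > /file'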
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":424,"failed":0}
------------------------------
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:36.226: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/framework/framework.go:652
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Jan 14 12:38:38.271: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-921 PodName:var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:38:38.271: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:38:38.272: INFO: ExecWithOptions: Clientset creation
Jan 14 12:38:38.272: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-921/pods/var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true)
STEP: test for file in mounted path
Jan 14 12:38:38.360: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-921 PodName:var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:38:38.360: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:38:38.361: INFO: ExecWithOptions: Clientset creation
Jan 14 12:38:38.361: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-921/pods/var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true)
STEP: updating the annotation value
Jan 14 12:38:38.926: INFO: Successfully updated pod "var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Jan 14 12:38:38.930: INFO: Deleting pod "var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e" in namespace "var-expansion-921"
Jan 14 12:38:38.936: INFO: Wait up to 5m0s for pod "var-expansion-7f8f9f7b-3439-46b5-9b73-acd87c2d122e" to be fully deleted
[AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188
Jan 14 12:39:12.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-921" for this suite.
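The two exec probes above rely on one volume being mounted twice, once whole at /volume_mount and once through an expanded subpath at /subpath_mount, so that touching /volume_mount/mypath/foo/test.log makes it visible as /subpath_mount/test.log. A minimal sketch of that shape using subPathExpr with an environment variable (hypothetical names; the actual spec drives the expansion differently):

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo       # hypothetical name
  spec:
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
      - name: SUBPATH
        value: mypath/foo
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount
      - name: workdir
        mountPath: /subpath_mount
        subPathExpr: $(SUBPATH)
    volumes:
    - name: workdir
      emptyDir: {}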
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:11.370: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:39:11.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a" in namespace "projected-2966" to be "Succeeded or Failed"
Jan 14 12:39:11.400: INFO: Pod "downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66272ms
Jan 14 12:39:13.405: INFO: Pod "downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007353869s
Jan 14 12:39:15.410: INFO: Pod "downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01223429s
STEP: Saw pod success
Jan 14 12:39:15.410: INFO: Pod "downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a" satisfied condition "Succeeded or Failed"
Jan 14 12:39:15.414: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c pod downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a container client-container: <nil>
STEP: delete the pod
Jan 14 12:39:15.429: INFO: Waiting for pod downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a to disappear
Jan 14 12:39:15.431: INFO: Pod downwardapi-volume-200e4778-2f62-47c9-8fde-b87717482a4a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/framework.go:188
Jan 14 12:39:15.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2966" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":426,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:15.497: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Hostname [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2801.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2801.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2801.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2801.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 14 12:39:23.608: INFO: DNS probes using dns-2801/dns-test-89ceaf6e-4512-406b-8f09-cabdfe5a5aa3 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS test/e2e/framework/framework.go:188
Jan 14 12:39:23.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2801" for this suite.
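The probe pod above loops getent hosts against the headless service's per-pod hostname records and writes OK markers for the spec to collect. While the probe pod still exists, the same lookup can be made once by hand (kubectl picks the pod's default container here; the pod is deleted when the spec finishes):

  kubectl --kubeconfig=/tmp/kubeconfig -n dns-2801 exec dns-test-89ceaf6e-4512-406b-8f09-cabdfe5a5aa3 -- getent hosts dns-querier-2.dns-test-service-2.dns-2801.svc.cluster.local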
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","total":-1,"completed":16,"skipped":472,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:23.692: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:245
[It] should add annotations for pods in rc [Conformance] test/e2e/framework/framework.go:652
STEP: creating Agnhost RC
Jan 14 12:39:23.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5932 create -f -'
Jan 14 12:39:25.314: INFO: stderr: ""
Jan 14 12:39:25.314: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 14 12:39:26.318: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:39:26.318: INFO: Found 0 / 1
Jan 14 12:39:27.321: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:39:27.321: INFO: Found 1 / 1
Jan 14 12:39:27.321: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 14 12:39:27.326: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:39:27.326: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 14 12:39:27.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5932 patch pod agnhost-primary-bbdn9 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 14 12:39:27.493: INFO: stderr: ""
Jan 14 12:39:27.493: INFO: stdout: "pod/agnhost-primary-bbdn9 patched\n"
STEP: checking annotations
Jan 14 12:39:27.502: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:39:27.502: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:188
Jan 14 12:39:27.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5932" for this suite.
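The patch above only adds one annotation, so the shorthand form is equivalent for the same pod and namespace from this run:

  kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5932 annotate pod agnhost-primary-bbdn9 x=y --overwrite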
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":17,"skipped":477,"failed":0}
------------------------------
[BeforeEach] [sig-instrumentation] Events test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:27.528: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of events [Conformance] test/e2e/framework/framework.go:652
STEP: Create set of events
Jan 14 12:39:27.597: INFO: created test-event-1
Jan 14 12:39:27.608: INFO: created test-event-2
Jan 14 12:39:27.615: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Jan 14 12:39:27.621: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Jan 14 12:39:27.654: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events test/e2e/framework/framework.go:188
Jan 14 12:39:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4499" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":18,"skipped":478,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:27.725: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services test/e2e/network/service.go:758
[It] should complete a service status lifecycle [Conformance] test/e2e/framework/framework.go:652
STEP: creating a Service
STEP: watching for the Service to be added
Jan 14 12:39:27.795: INFO: Found Service test-service-skvqv in namespace services-3884 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}]
Jan 14 12:39:27.796: INFO: Service test-service-skvqv created
STEP: Getting /status
Jan 14 12:39:27.813: INFO: Service test-service-skvqv has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Jan 14 12:39:27.838: INFO: observed Service test-service-skvqv in namespace services-3884 with annotations: map[] & LoadBalancer: {[]}
Jan 14 12:39:27.838: INFO: Found Service test-service-skvqv in namespace services-3884 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Jan 14 12:39:27.838: INFO: Service test-service-skvqv has service status patched
STEP: updating the ServiceStatus
Jan 14 12:39:27.865: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Jan 14 12:39:27.877: INFO: Observed Service test-service-skvqv in namespace services-3884 with annotations: map[] & Conditions: {[]}
Jan 14 12:39:27.878: INFO: Observed event: &Service{ObjectMeta:{test-service-skvqv  services-3884 b265705f-8643-4008-8185-8f6f1054bc77 3304 0 2023-01-14 12:39:27 +0000 UTC <nil> <nil> map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-01-14 12:39:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-01-14 12:39:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.143.244.131,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.143.244.131],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Jan 14 12:39:27.879: INFO: Found Service test-service-skvqv in namespace services-3884 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 14 12:39:27.879: INFO: Service test-service-skvqv has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Jan 14 12:39:27.918: INFO: observed Service test-service-skvqv in namespace services-3884 with labels: map[test-service-static:true]
Jan 14 12:39:27.919: INFO: observed Service test-service-skvqv in namespace services-3884 with labels: map[test-service-static:true]
Jan 14 12:39:27.919: INFO: observed Service test-service-skvqv in namespace services-3884 with labels: map[test-service-static:true]
Jan 14 12:39:27.919: INFO: Found Service test-service-skvqv in namespace services-3884 with labels: map[test-service:patched test-service-static:true]
Jan 14 12:39:27.919: INFO: Service test-service-skvqv patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Jan 14 12:39:27.964: INFO: Observed event: ADDED
Jan 14 12:39:27.964: INFO: Observed event: MODIFIED
Jan 14 12:39:27.965: INFO: Observed event: MODIFIED
Jan 14 12:39:27.965: INFO: Observed event: MODIFIED
Jan 14 12:39:27.965: INFO: Found Service test-service-skvqv in namespace services-3884 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Jan 14 12:39:27.965: INFO: Service test-service-skvqv deleted
[AfterEach] [sig-network] Services test/e2e/framework/framework.go:188
Jan 14 12:39:27.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3884" for this suite.
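The lifecycle spec above writes the LoadBalancer ingress (203.0.113.1, a documentation-range address) through the status subresource, which an ordinary patch of the object does not touch. Recent kubectl releases expose this directly via the --subresource flag (introduced as alpha in the v1.24 line this cluster upgraded to), so a rough by-hand equivalent would be:

  kubectl --kubeconfig=/tmp/kubeconfig -n services-3884 patch service test-service-skvqv --subresource=status --type=merge -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'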
[AfterEach] [sig-network] Services test/e2e/network/service.go:762
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":19,"skipped":492,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:12.994: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:188
Jan 14 12:39:29.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7398" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0}
------------------------------
[BeforeEach] [sig-node] Containers test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:29.191: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test override command
Jan 14 12:39:29.296: INFO: Waiting up to 5m0s for pod "client-containers-30be1831-404e-4353-8dac-d54fe3dab537" in namespace "containers-2531" to be "Succeeded or Failed"
Jan 14 12:39:29.309: INFO: Pod "client-containers-30be1831-404e-4353-8dac-d54fe3dab537": Phase="Pending", Reason="", readiness=false. Elapsed: 12.727244ms
Jan 14 12:39:31.319: INFO: Pod "client-containers-30be1831-404e-4353-8dac-d54fe3dab537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022412406s
Jan 14 12:39:33.327: INFO: Pod "client-containers-30be1831-404e-4353-8dac-d54fe3dab537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030498191s
STEP: Saw pod success
Jan 14 12:39:33.327: INFO: Pod "client-containers-30be1831-404e-4353-8dac-d54fe3dab537" satisfied condition "Succeeded or Failed"
Jan 14 12:39:33.332: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod client-containers-30be1831-404e-4353-8dac-d54fe3dab537 container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:39:33.374: INFO: Waiting for pod client-containers-30be1831-404e-4353-8dac-d54fe3dab537 to disappear
Jan 14 12:39:33.384: INFO: Pod client-containers-30be1831-404e-4353-8dac-d54fe3dab537 no longer exists
[AfterEach] [sig-node] Containers test/e2e/framework/framework.go:188
Jan 14 12:39:33.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2531" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":64,"failed":0}
[BeforeEach] [sig-network] EndpointSliceMirroring test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:33.425: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring test/e2e/network/endpointslicemirroring.go:41
[It] should mirror a custom Endpoints resource through create update and delete [Conformance] test/e2e/framework/framework.go:652
STEP: mirroring a new custom Endpoint
Jan 14 12:39:33.528: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
STEP: mirroring an update to a custom Endpoint
STEP: mirroring deletion of a custom Endpoint
Jan 14 12:39:35.574: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring test/e2e/framework/framework.go:188
Jan 14 12:39:37.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5775" for this suite.
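The "got 0"/"got 1" waits above poll the EndpointSlice mirroring controller, which watches custom Endpoints belonging to a selectorless Service and keeps a matching, labeled EndpointSlice in step. A sketch of watching that by hand (the service/endpoints name here is hypothetical; the mirrored slice carries the service-name label):

  kubectl --kubeconfig=/tmp/kubeconfig -n endpointslicemirroring-5775 get endpointslices -l kubernetes.io/service-name=example-custom-endpoints -w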
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":6,"skipped":64,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:37.656: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/apimachinery/table_conversion.go:49
[It] should return a 406 for a backend which does not implement metadata [Conformance] test/e2e/framework/framework.go:652
[AfterEach] [sig-api-machinery] Servers with support for Table transformation test/e2e/framework/framework.go:188
Jan 14 12:39:37.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4650" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":7,"skipped":85,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:37.754: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:39:38.275: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:39:41.312: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] test/e2e/framework/framework.go:652
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:37.754: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:39:38.275: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:39:41.312: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:652
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:188
Jan 14 12:39:41.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6054" for this suite.
STEP: Destroying namespace "webhook-6054-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0}
------------------------------
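Note: the point of this spec is that admission webhooks must not be able to intercept the webhook configuration objects themselves; otherwise a misbehaving webhook could make itself unmodifiable and undeletable. A minimal sketch of the "dummy configuration" round-trip, with an illustrative name and an intentionally empty webhook list:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: example-dummy-validating
    webhooks: []
    EOF
    # deletion must succeed even while mutating/validating webhooks are registered
    kubectl delete validatingwebhookconfiguration example-dummy-validating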
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:01.762: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-6757
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating stateful set ss in namespace statefulset-6757
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6757
Jan 14 12:39:01.844: INFO: Found 0 stateful pods, waiting for 1
Jan 14 12:39:11.850: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 14 12:39:11.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:39:12.026: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:39:12.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:39:12.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:39:12.031: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 14 12:39:22.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:39:22.036: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:39:22.052: INFO: POD   NODE   PHASE   GRACE   CONDITIONS
Jan 14 12:39:22.053: INFO: ss-0   k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk   Running   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC }]
Jan 14 12:39:22.053: INFO:
Jan 14 12:39:22.053: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 14 12:39:23.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996963129s
Jan 14 12:39:24.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991539643s
Jan 14 12:39:25.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984996476s
Jan 14 12:39:26.124: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.930627074s
Jan 14 12:39:27.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.925135291s
Jan 14 12:39:28.145: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.918411066s
Jan 14 12:39:29.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.904710868s
Jan 14 12:39:30.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.892526433s
Jan 14 12:39:31.179: INFO: Verifying statefulset ss doesn't scale past 3 for another 877.187453ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6757
Jan 14 12:39:32.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:39:32.507: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 14 12:39:32.507: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:39:32.507: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:39:32.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:39:32.908: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 14 12:39:32.909: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:39:32.909: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:39:32.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:39:33.370: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 14 12:39:33.370: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:39:33.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:39:33.384: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 14 12:39:33.384: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 14 12:39:33.384: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
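Note: the mv dance above is how the suite toggles pod readiness without killing the container: the webserver's readiness probe is assumed to serve /index.html, so moving the file out of the docroot flips the pod to Ready=false and moving it back restores Ready=true. Reproduced by hand against this namespace it would look like:

    # break readiness (probe target disappears; container keeps running)
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6757 exec ss-0 -- \
      /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    # restore readiness
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6757 exec ss-0 -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'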
STEP: Scale down will not halt with unhealthy stateful pod
Jan 14 12:39:33.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:39:33.852: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:39:33.853: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:39:33.853: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:39:33.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:39:34.175: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:39:34.175: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:39:34.175: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:39:34.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6757 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:39:34.606: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:39:34.606: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:39:34.606: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:39:34.606: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:39:34.625: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 14 12:39:44.639: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:39:44.639: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:39:44.639: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:39:44.669: INFO: POD   NODE   PHASE   GRACE   CONDITIONS
Jan 14 12:39:44.669: INFO: ss-0   k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk   Running   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC }]
Jan 14 12:39:44.670: INFO: ss-1   k8s-upgrade-and-conformance-ihjwwi-worker-g557ne   Running   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC }]
Jan 14 12:39:44.670: INFO: ss-2   k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c   Running   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC }]
Jan 14 12:39:44.670: INFO:
Jan 14 12:39:44.670: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 14 12:39:45.677: INFO: POD   NODE   PHASE   GRACE   CONDITIONS
Jan 14 12:39:45.677: INFO: ss-0   k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk   Running   30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:01 +0000 UTC }]
Jan 14 12:39:45.678: INFO: ss-1   k8s-upgrade-and-conformance-ihjwwi-worker-g557ne   Running   30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 12:39:22 +0000 UTC }]
Jan 14 12:39:45.678: INFO:
Jan 14 12:39:45.678: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 14 12:39:46.686: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.98028273s
Jan 14 12:39:47.692: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.971324337s
Jan 14 12:39:48.703: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.964837349s
Jan 14 12:39:49.711: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.954388674s
Jan 14 12:39:50.718: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.946501422s
Jan 14 12:39:51.724: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.939023047s
Jan 14 12:39:52.730: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.933566424s
Jan 14 12:39:53.736: INFO: Verifying statefulset ss doesn't scale past 0 for another 927.00067ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6757
Jan 14 12:39:54.742: INFO: Scaling statefulset ss to 0
Jan 14 12:39:54.762: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:122
Jan 14 12:39:54.768: INFO: Deleting all statefulset in ns statefulset-6757
Jan 14 12:39:54.774: INFO: Scaling statefulset ss to 0
Jan 14 12:39:54.803: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:39:54.807: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:188
Jan 14 12:39:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6757" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
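Note: "burst" scaling in this spec refers to the StatefulSet being driven with parallel pod management, so scale-up and scale-down do not wait for each ordinal to become Ready. A sketch of the relevant knob and the scaling operations (the spec fragment is an assumption about the test fixture, not copied from this run):

    # in the StatefulSet spec:
    #   spec:
    #     podManagementPolicy: Parallel   # default is OrderedReady
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6757 scale statefulset ss --replicas=3
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6757 scale statefulset ss --replicas=0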
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:41.629: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:652
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 14 12:39:41.664: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:39:45.515: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:188
Jan 14 12:40:00.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3294" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":9,"skipped":121,"failed":0}
------------------------------
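Note: "show up in OpenAPI documentation" means the CRDs' schemas are merged into the apiserver's aggregated OpenAPI document. A quick way to check by hand, assuming two freshly created CRD groups named foo.example.com and bar.example.com (illustrative names, not the suite's):

    kubectl get --raw /openapi/v2 | grep -c 'foo.example.com'
    kubectl get --raw /openapi/v2 | grep -c 'bar.example.com'
    # both counts should be non-zero once publishing has converged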
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:54.885: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 14 12:39:54.969: INFO: Waiting up to 5m0s for pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49" in namespace "emptydir-196" to be "Succeeded or Failed"
Jan 14 12:39:54.976: INFO: Pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.983712ms
Jan 14 12:39:56.983: INFO: Pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49": Phase="Running", Reason="", readiness=true. Elapsed: 2.014022334s
Jan 14 12:39:58.989: INFO: Pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49": Phase="Running", Reason="", readiness=false. Elapsed: 4.020181734s
Jan 14 12:40:01.001: INFO: Pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032148522s
STEP: Saw pod success
Jan 14 12:40:01.001: INFO: Pod "pod-406c39a3-b443-446e-b2d9-7366b0367e49" satisfied condition "Succeeded or Failed"
Jan 14 12:40:01.005: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-406c39a3-b443-446e-b2d9-7366b0367e49 container test-container: <nil>
STEP: delete the pod
Jan 14 12:40:01.037: INFO: Waiting for pod pod-406c39a3-b443-446e-b2d9-7366b0367e49 to disappear
Jan 14 12:40:01.054: INFO: Pod pod-406c39a3-b443-446e-b2d9-7366b0367e49 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:188
Jan 14 12:40:01.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-196" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
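Note: the "(non-root,0666,tmpfs)" combination means a non-root container writing a 0666-mode file into a memory-backed emptyDir. A minimal sketch of an equivalent pod (illustrative, not the suite's agnhost invocation):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # non-root
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume
      volumes:
      - name: vol
        emptyDir:
          medium: Memory         # tmpfs
    EOF
    kubectl logs emptydir-0666-demo   # once the pod reaches Succeeded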
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:00.333: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jan 14 12:40:00.384: INFO: Waiting up to 5m0s for pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb" in namespace "downward-api-6796" to be "Succeeded or Failed"
Jan 14 12:40:00.392: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462783ms
Jan 14 12:40:02.410: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025157015s
Jan 14 12:40:04.894: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508716705s
Jan 14 12:40:06.980: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5945183s
Jan 14 12:40:09.018: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.633305976s
STEP: Saw pod success
Jan 14 12:40:09.019: INFO: Pod "downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb" satisfied condition "Succeeded or Failed"
Jan 14 12:40:09.079: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb container dapi-container: <nil>
STEP: delete the pod
Jan 14 12:40:09.295: INFO: Waiting for pod downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb to disappear
Jan 14 12:40:09.361: INFO: Pod downward-api-bf50147e-a623-406b-b358-bf4a7caa6dcb no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
Jan 14 12:40:09.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6796" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":148,"failed":0}
------------------------------
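Note: the env-var plumbing under test is the downward API's resourceFieldRef. A minimal sketch of such a pod (names, image, and values are illustrative; the suite uses its own fixture):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | sort"]
        resources:
          requests: {cpu: 250m, memory: 32Mi}
          limits: {cpu: 500m, memory: 64Mi}
        env:
        - name: CPU_LIMIT            # exposed from limits.cpu
          valueFrom: {resourceFieldRef: {resource: limits.cpu}}
        - name: MEMORY_REQUEST       # exposed from requests.memory
          valueFrom: {resourceFieldRef: {resource: requests.memory}}
    EOF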
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:38:04.554: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:758
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating service in namespace services-1572
Jan 14 12:38:04.594: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:38:06.598: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jan 14 12:38:06.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1572 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 14 12:38:07.057: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 14 12:38:07.057: INFO: stdout: "iptables"
Jan 14 12:38:07.057: INFO: proxyMode: iptables
Jan 14 12:38:07.065: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 14 12:38:07.068: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-1572
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1572
I0114 12:38:07.089475      21 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1572, replica count: 3
I0114 12:38:10.140889      21 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0114 12:38:13.141133      21 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 14 12:38:13.152: INFO: Creating new exec pod
Jan 14 12:38:16.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1572 exec execpod-affinityt6d77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jan 14 12:38:18.327: INFO: rc: 1
Jan 14 12:38:18.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1572 exec execpod-affinityt6d77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:
stderr: + nc -v -t -w 2 affinity-nodeport-timeout 80
+ echo hostName
nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
[The same reachability check was retried every few seconds from 12:38:19 through 12:40:18; every attempt failed identically with "nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress" followed by "Retrying...". The repeated blocks are omitted here.]
Jan 14 12:40:22.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1572 exec execpod-affinityt6d77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jan 14 12:40:25.903: INFO: rc: 1
Jan 14 12:40:25.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1572 exec execpod-affinityt6d77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:
stderr: + nc -v -t -w 2 affinity-nodeport-timeout 80
+ echo hostName
nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
Jan 14 12:40:25.903: FAIL: Unexpected error:
    <*errors.errorString | 0xc00366a530>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0010ba180, {0x7a36e58, 0xc002f8f080}, 0xc00066c000)
	test/e2e/network/service.go:3688 +0x7a8
k8s.io/kubernetes/test/e2e/network.glob..func25.28()
	test/e2e/network/service.go:2137 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
	test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x24d4cd9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000503d40, 0x73bdd00)
	/usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1486 +0x35f
Jan 14 12:40:25.904: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1572, will wait for the garbage collector to delete the pods
Jan 14 12:40:26.240: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 85.094935ms
Jan 14 12:40:26.840: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.431669ms
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:188
Jan 14 12:40:30.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1572" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762

• Failure [146.233 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  test/e2e/framework/framework.go:652

  Jan 14 12:40:25.904: Unexpected error:
      <*errors.errorString | 0xc00366a530>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
  occurred

  test/e2e/network/service.go:3688
------------------------------
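Note: the failing spec exercises ClientIP session affinity with a timeout on a NodePort service; the nc probe it kept retrying is quoted above. A minimal sketch of the object shape under test (selector, ports, and timeout are illustrative; the real service and the stickiness verification are built by the suite):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport-timeout
    spec:
      type: NodePort
      selector:
        app: affinity-backend
      ports:
      - port: 80
        targetPort: 9376
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10
    EOF
    # repeated requests from one client within timeoutSeconds should land on one backend;
    # here the service never became reachable at all, so the affinity check never started
    kubectl -n services-1572 exec execpod -- /bin/sh -c \
      'for i in 1 2 3 4 5; do echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80; done'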
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:01.122: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 14 12:40:41.546: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-ihjwwi-wgnbq-d4k98 is Running (Ready = true)
Jan 14 12:40:41.675: INFO:
For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 14 12:40:41.675: INFO: Deleting pod "simpletest.rc-25sz7" in namespace "gc-2381"
Jan 14 12:40:41.691: INFO: Deleting pod "simpletest.rc-2ccpk" in namespace "gc-2381"
Jan 14 12:40:41.711: INFO: Deleting pod "simpletest.rc-2rppl" in namespace "gc-2381"
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:01.122: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 14 12:40:41.546: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-ihjwwi-wgnbq-d4k98 is Running (Ready = true)
Jan 14 12:40:41.675: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 14 12:40:41.675: INFO: Deleting pod "simpletest.rc-25sz7" in namespace "gc-2381"
Jan 14 12:40:41.691: INFO: Deleting pod "simpletest.rc-2ccpk" in namespace "gc-2381"
Jan 14 12:40:41.711: INFO: Deleting pod "simpletest.rc-2rppl" in namespace "gc-2381"
Jan 14 12:40:41.745: INFO: Deleting pod "simpletest.rc-2zqf4" in namespace "gc-2381"
Jan 14 12:40:41.773: INFO: Deleting pod "simpletest.rc-48tfs" in namespace "gc-2381"
Jan 14 12:40:41.796: INFO: Deleting pod "simpletest.rc-4pmxs" in namespace "gc-2381"
Jan 14 12:40:41.842: INFO: Deleting pod "simpletest.rc-52xpf" in namespace "gc-2381"
Jan 14 12:40:41.892: INFO: Deleting pod "simpletest.rc-5dmhg" in namespace "gc-2381"
Jan 14 12:40:41.930: INFO: Deleting pod "simpletest.rc-5pq5s" in namespace "gc-2381"
Jan 14 12:40:41.976: INFO: Deleting pod "simpletest.rc-5rcdl" in namespace "gc-2381"
Jan 14 12:40:42.058: INFO: Deleting pod "simpletest.rc-5zl5m" in namespace "gc-2381"
Jan 14 12:40:42.093: INFO: Deleting pod "simpletest.rc-6cfbc" in namespace "gc-2381"
Jan 14 12:40:42.128: INFO: Deleting pod "simpletest.rc-6dqwq" in namespace "gc-2381"
Jan 14 12:40:42.196: INFO: Deleting pod "simpletest.rc-6fvx6" in namespace "gc-2381"
Jan 14 12:40:42.263: INFO: Deleting pod "simpletest.rc-6kfpg" in namespace "gc-2381"
Jan 14 12:40:42.328: INFO: Deleting pod "simpletest.rc-74src" in namespace "gc-2381"
Jan 14 12:40:42.375: INFO: Deleting pod "simpletest.rc-76qm6" in namespace "gc-2381"
Jan 14 12:40:42.452: INFO: Deleting pod "simpletest.rc-7xk5x" in namespace "gc-2381"
Jan 14 12:40:42.519: INFO: Deleting pod "simpletest.rc-7zghf" in namespace "gc-2381"
Jan 14 12:40:42.547: INFO: Deleting pod "simpletest.rc-846s9" in namespace "gc-2381"
Jan 14 12:40:42.588: INFO: Deleting pod "simpletest.rc-9cj5j" in namespace "gc-2381"
Jan 14 12:40:42.645: INFO: Deleting pod "simpletest.rc-9wgvn" in namespace "gc-2381"
Jan 14 12:40:42.710: INFO: Deleting pod "simpletest.rc-c4lvz" in namespace "gc-2381"
Jan 14 12:40:42.800: INFO: Deleting pod "simpletest.rc-c66rt" in namespace "gc-2381"
Jan 14 12:40:42.879: INFO: Deleting pod "simpletest.rc-c6v22" in namespace "gc-2381"
Jan 14 12:40:42.924: INFO: Deleting pod "simpletest.rc-c6vq8" in namespace "gc-2381"
Jan 14 12:40:42.991: INFO: Deleting pod "simpletest.rc-c94xj" in namespace "gc-2381"
Jan 14 12:40:43.095: INFO: Deleting pod "simpletest.rc-cggp9" in namespace "gc-2381"
Jan 14 12:40:43.174: INFO: Deleting pod "simpletest.rc-cl9h4" in namespace "gc-2381"
Jan 14 12:40:43.268: INFO: Deleting pod "simpletest.rc-cz8xp" in namespace "gc-2381"
Jan 14 12:40:43.335: INFO: Deleting pod "simpletest.rc-czjvq" in namespace "gc-2381"
Jan 14 12:40:43.377: INFO: Deleting pod "simpletest.rc-d7gjk" in namespace "gc-2381"
Jan 14 12:40:43.471: INFO: Deleting pod "simpletest.rc-d9qc6" in namespace "gc-2381"
Jan 14 12:40:43.521: INFO: Deleting pod "simpletest.rc-dblcx" in namespace "gc-2381"
Jan 14 12:40:43.595: INFO: Deleting pod "simpletest.rc-dcdh7" in namespace "gc-2381"
Jan 14 12:40:43.677: INFO: Deleting pod "simpletest.rc-dcpt8" in namespace "gc-2381"
Jan 14 12:40:43.753: INFO: Deleting pod "simpletest.rc-dg8fm" in namespace "gc-2381"
Jan 14 12:40:43.822: INFO: Deleting pod "simpletest.rc-dj6rs" in namespace "gc-2381"
Jan 14 12:40:43.880: INFO: Deleting pod "simpletest.rc-drrxs" in namespace "gc-2381"
Jan 14 12:40:43.932: INFO: Deleting pod "simpletest.rc-f47vb" in namespace "gc-2381"
Jan 14 12:40:44.024: INFO: Deleting pod "simpletest.rc-f8fj4" in namespace "gc-2381"
Jan 14 12:40:44.149: INFO: Deleting pod "simpletest.rc-fb7g2" in namespace "gc-2381"
Jan 14 12:40:44.189: INFO: Deleting pod "simpletest.rc-fcp25" in namespace "gc-2381"
Jan 14 12:40:44.276: INFO: Deleting pod "simpletest.rc-fmsbn" in namespace "gc-2381"
Jan 14 12:40:44.364: INFO: Deleting pod "simpletest.rc-fs7gp" in namespace "gc-2381"
Jan 14 12:40:44.435: INFO: Deleting pod "simpletest.rc-gjtcv" in namespace "gc-2381"
Jan 14 12:40:44.454: INFO: Deleting pod "simpletest.rc-gwr4p" in namespace "gc-2381"
Jan 14 12:40:44.538: INFO: Deleting pod "simpletest.rc-h6qps" in namespace "gc-2381"
Jan 14 12:40:44.637: INFO: Deleting pod "simpletest.rc-hcjvj" in namespace "gc-2381"
Jan 14 12:40:44.694: INFO: Deleting pod "simpletest.rc-hfjj5" in namespace "gc-2381"
Jan 14 12:40:44.734: INFO: Deleting pod "simpletest.rc-hr786" in namespace "gc-2381"
Jan 14 12:40:44.790: INFO: Deleting pod "simpletest.rc-hxv8w" in namespace "gc-2381"
Jan 14 12:40:44.869: INFO: Deleting pod "simpletest.rc-j4jcz" in namespace "gc-2381"
Jan 14 12:40:44.974: INFO: Deleting pod "simpletest.rc-j5crg" in namespace "gc-2381"
"gc-2381" Jan 14 12:40:45.032: INFO: Deleting pod "simpletest.rc-j5xsj" in namespace "gc-2381" Jan 14 12:40:45.165: INFO: Deleting pod "simpletest.rc-jbzn2" in namespace "gc-2381" Jan 14 12:40:45.204: INFO: Deleting pod "simpletest.rc-jjnxn" in namespace "gc-2381" Jan 14 12:40:45.274: INFO: Deleting pod "simpletest.rc-jtnfm" in namespace "gc-2381" Jan 14 12:40:45.340: INFO: Deleting pod "simpletest.rc-jtw9j" in namespace "gc-2381" Jan 14 12:40:45.386: INFO: Deleting pod "simpletest.rc-k644b" in namespace "gc-2381" Jan 14 12:40:45.436: INFO: Deleting pod "simpletest.rc-kbr8p" in namespace "gc-2381" Jan 14 12:40:45.486: INFO: Deleting pod "simpletest.rc-kktww" in namespace "gc-2381" Jan 14 12:40:45.615: INFO: Deleting pod "simpletest.rc-klwwl" in namespace "gc-2381" Jan 14 12:40:45.652: INFO: Deleting pod "simpletest.rc-lczl5" in namespace "gc-2381" Jan 14 12:40:45.697: INFO: Deleting pod "simpletest.rc-ld96t" in namespace "gc-2381" Jan 14 12:40:45.726: INFO: Deleting pod "simpletest.rc-lj4k2" in namespace "gc-2381" Jan 14 12:40:45.832: INFO: Deleting pod "simpletest.rc-ln82x" in namespace "gc-2381" Jan 14 12:40:45.914: INFO: Deleting pod "simpletest.rc-mndl9" in namespace "gc-2381" Jan 14 12:40:45.960: INFO: Deleting pod "simpletest.rc-mr6sv" in namespace "gc-2381" Jan 14 12:40:46.064: INFO: Deleting pod "simpletest.rc-n99lv" in namespace "gc-2381" Jan 14 12:40:46.111: INFO: Deleting pod "simpletest.rc-ng6lz" in namespace "gc-2381" Jan 14 12:40:46.171: INFO: Deleting pod "simpletest.rc-nn9bh" in namespace "gc-2381" Jan 14 12:40:46.201: INFO: Deleting pod "simpletest.rc-pd8gw" in namespace "gc-2381" Jan 14 12:40:46.304: INFO: Deleting pod "simpletest.rc-pjplt" in namespace "gc-2381" Jan 14 12:40:46.398: INFO: Deleting pod "simpletest.rc-px7zg" in namespace "gc-2381" Jan 14 12:40:46.440: INFO: Deleting pod "simpletest.rc-qdbgx" in namespace "gc-2381" Jan 14 12:40:46.542: INFO: Deleting pod "simpletest.rc-r2tk5" in namespace "gc-2381" Jan 14 12:40:46.603: INFO: Deleting pod "simpletest.rc-r8v6m" in namespace "gc-2381" Jan 14 12:40:46.640: INFO: Deleting pod "simpletest.rc-rkdgf" in namespace "gc-2381" Jan 14 12:40:46.673: INFO: Deleting pod "simpletest.rc-s22mw" in namespace "gc-2381" Jan 14 12:40:46.723: INFO: Deleting pod "simpletest.rc-sc4p7" in namespace "gc-2381" Jan 14 12:40:46.812: INFO: Deleting pod "simpletest.rc-shq44" in namespace "gc-2381" Jan 14 12:40:46.878: INFO: Deleting pod "simpletest.rc-vczjl" in namespace "gc-2381" Jan 14 12:40:46.941: INFO: Deleting pod "simpletest.rc-vd57p" in namespace "gc-2381" Jan 14 12:40:46.997: INFO: Deleting pod "simpletest.rc-vgwlj" in namespace "gc-2381" Jan 14 12:40:47.047: INFO: Deleting pod "simpletest.rc-vm4w2" in namespace "gc-2381" Jan 14 12:40:47.084: INFO: Deleting pod "simpletest.rc-vtqg6" in namespace "gc-2381" Jan 14 12:40:47.117: INFO: Deleting pod "simpletest.rc-wjw96" in namespace "gc-2381" Jan 14 12:40:47.166: INFO: Deleting pod "simpletest.rc-wlglx" in namespace "gc-2381" Jan 14 12:40:47.255: INFO: Deleting pod "simpletest.rc-wlkf8" in namespace "gc-2381" Jan 14 12:40:47.282: INFO: Deleting pod "simpletest.rc-x4ln4" in namespace "gc-2381" Jan 14 12:40:47.344: INFO: Deleting pod "simpletest.rc-x52wr" in namespace "gc-2381" Jan 14 12:40:47.393: INFO: Deleting pod "simpletest.rc-xj7tv" in namespace "gc-2381" Jan 14 12:40:47.412: INFO: Deleting pod "simpletest.rc-xlqwx" in namespace "gc-2381" Jan 14 12:40:47.483: INFO: Deleting pod "simpletest.rc-xsx7r" in namespace "gc-2381" Jan 14 12:40:47.557: INFO: Deleting pod 
"simpletest.rc-zfr9s" in namespace "gc-2381" Jan 14 12:40:47.605: INFO: Deleting pod "simpletest.rc-zmlxq" in namespace "gc-2381" Jan 14 12:40:47.653: INFO: Deleting pod "simpletest.rc-zpl55" in namespace "gc-2381" Jan 14 12:40:47.718: INFO: Deleting pod "simpletest.rc-zxk7f" in namespace "gc-2381" Jan 14 12:40:47.780: INFO: Deleting pod "simpletest.rc-zxlxg" in namespace "gc-2381" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 14 12:40:47.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-2381" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":4,"skipped":30,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} [BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:40:47.952: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 14 12:40:58.281: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- �[1mSTEP�[0m: delete the container [AfterEach] [sig-node] Container Runtime test/e2e/framework/framework.go:188 Jan 14 12:40:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-788" for this suite. 
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:47.952: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 14 12:40:58.281: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:188
Jan 14 12:40:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-788" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":30,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
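The spec checks that the kubelet surfaces whatever a container writes to its terminationMessagePath, even when the path is non-default and the process runs as a non-root user; the "Expected: &{DONE}" line above is that assertion. A minimal sketch of such a pod, assuming the same modules as before; the image, UID, and exact path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000) // any non-root UID

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // illustrative image
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path: on exit, the kubelet reads the message from here
				// and copies it into the container's terminated state.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}

After the pod succeeds, the message shows up under status.containerStatuses[0].state.terminated.message, which is what the spec compares against "DONE".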
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:58.383: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:40:58.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a" in namespace "downward-api-2905" to be "Succeeded or Failed"
Jan 14 12:40:58.466: INFO: Pod "downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.447289ms
Jan 14 12:41:00.472: INFO: Pod "downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011322044s
Jan 14 12:41:02.479: INFO: Pod "downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018714504s
STEP: Saw pod success
Jan 14 12:41:02.479: INFO: Pod "downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a" satisfied condition "Succeeded or Failed"
Jan 14 12:41:02.486: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a container client-container: <nil>
STEP: delete the pod
Jan 14 12:41:02.518: INFO: Waiting for pod downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a to disappear
Jan 14 12:41:02.523: INFO: Pod downwardapi-volume-c6356811-707e-411f-be41-fe04d759464a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
Jan 14 12:41:02.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2905" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
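This spec projects the container's own CPU request into a file via a downward API volume and has the container read it back. A minimal sketch of the mechanism; the container name "client-container" matches the log, while the image, mount path, and 250m request are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// Projects this container's CPU request into the file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}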
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:02.581: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:188
Jan 14 12:41:02.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-9360" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":63,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
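The RuntimeClass spec needs no pod to actually run: it only asserts that the API server rejects a Pod whose spec.runtimeClassName refers to a RuntimeClass that does not exist. A minimal client-go sketch of the create that should fail; the class and pod names are deliberately made-up, and the namespace is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	rc := "no-such-runtimeclass" // deliberately non-existent
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-reject-demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rc,
			Containers:       []corev1.Container{{Name: "main", Image: "busybox"}},
		},
	}

	// The RuntimeClass admission plugin should refuse this create outright.
	_, err = client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("create error (expected):", err)
}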
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:02.726: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:91
[It] should run the lifecycle of a Deployment [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Jan 14 12:41:02.775: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.775: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.790: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.790: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.823: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.823: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.847: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:02.847: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 14 12:41:04.597: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 14 12:41:04.597: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 14 12:41:04.928: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Jan 14 12:41:04.947: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 0
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:04.952: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:04.979: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:04.980: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:05.018: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:05.018: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:05.046: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:05.046: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:07.640: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:07.640: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:07.705: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
STEP: listing Deployments
Jan 14 12:41:07.710: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Jan 14 12:41:07.728: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Jan 14 12:41:07.748: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:07.749: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:07.776: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:07.832: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:07.870: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:09.211: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:09.243: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:09.256: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:09.267: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:09.285: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 14 12:41:10.993: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Jan 14 12:41:11.063: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 1
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:11.064: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 2
Jan 14 12:41:11.065: INFO: observed Deployment test-deployment in namespace deployment-6825 with ReadyReplicas 3
STEP: deleting the Deployment
Jan 14 12:41:11.078: INFO: observed event type MODIFIED
Jan 14 12:41:11.079: INFO: observed event type MODIFIED
Jan 14 12:41:11.079: INFO: observed event type MODIFIED
Jan 14 12:41:11.079: INFO: observed event type MODIFIED
Jan 14 12:41:11.079: INFO: observed event type MODIFIED
Jan 14 12:41:11.080: INFO: observed event type MODIFIED
Jan 14 12:41:11.080: INFO: observed event type MODIFIED
Jan 14 12:41:11.082: INFO: observed event type MODIFIED
Jan 14 12:41:11.083: INFO: observed event type MODIFIED
Jan 14 12:41:11.083: INFO: observed event type MODIFIED
Jan 14 12:41:11.083: INFO: observed event type MODIFIED
Jan 14 12:41:11.083: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:84
Jan 14 12:41:11.090: INFO: Log out all the ReplicaSets if there is no deployment created
Jan 14 12:41:11.105: INFO: ReplicaSet "test-deployment-6b48c869b6": &ReplicaSet{ObjectMeta:{test-deployment-6b48c869b6 deployment-6825 ae53d777-c89e-4e13-a423-fbe6e03a899b 6191 3 2023-01-14 12:41:02 +0000 UTC <nil> <nil> map[pod-template-hash:6b48c869b6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment bf0b9356-7995-4e0a-93ae-ad7cd8873917 0xc003742787 0xc003742788}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf0b9356-7995-4e0a-93ae-ad7cd8873917\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:07 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6b48c869b6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:6b48c869b6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003742810 <nil> ClusterFirst map[] <nil> false
false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:41:11.118: INFO: ReplicaSet "test-deployment-74c6dd549b": &ReplicaSet{ObjectMeta:{test-deployment-74c6dd549b deployment-6825 9953a8d6-5b4b-4b43-8bf9-22a666197777 6280 2 2023-01-14 12:41:07 +0000 UTC <nil> <nil> map[pod-template-hash:74c6dd549b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment bf0b9356-7995-4e0a-93ae-ad7cd8873917 0xc003742877 0xc003742878}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:41:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf0b9356-7995-4e0a-93ae-ad7cd8873917\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:09 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 74c6dd549b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003742900 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:41:11.130: INFO: pod: "test-deployment-74c6dd549b-q7pvv": &Pod{ObjectMeta:{test-deployment-74c6dd549b-q7pvv test-deployment-74c6dd549b- deployment-6825 
4578ab2b-1b06-4636-b0fe-aca047476ddc 6244 0 2023-01-14 12:41:07 +0000 UTC <nil> <nil> map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-74c6dd549b 9953a8d6-5b4b-4b43-8bf9-22a666197777 0xc0031ffd27 0xc0031ffd28}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9953a8d6-5b4b-4b43-8bf9-22a666197777\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qhsjn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhsjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.41,StartTime:2023-01-14 12:41:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://adb53fff9ba612e829c0f958609302937e9f7425ca4a6a2c4f575c405ac8ebab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:11.131: INFO: pod: "test-deployment-74c6dd549b-qzxzm": &Pod{ObjectMeta:{test-deployment-74c6dd549b-qzxzm test-deployment-74c6dd549b- deployment-6825 8bd86e49-1738-43f9-8d5f-03aab763969d 6279 0 2023-01-14 12:41:09 +0000 UTC <nil> <nil> map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-74c6dd549b 9953a8d6-5b4b-4b43-8bf9-22a666197777 0xc0031fff07 0xc0031fff08}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9953a8d6-5b4b-4b43-8bf9-22a666197777\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-prqp4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prqp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default
,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.40,StartTime:2023-01-14 12:41:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://998509c009526f3ca1256a233399e0810684c936a2ccd7cc284be41e94d7fb8e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:11.131: INFO: ReplicaSet "test-deployment-84b949bdfc": &ReplicaSet{ObjectMeta:{test-deployment-84b949bdfc deployment-6825 c50a9e25-f73e-4b85-869c-f2093bc16007 6288 4 2023-01-14 12:41:04 +0000 UTC <nil> <nil> map[pod-template-hash:84b949bdfc test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment bf0b9356-7995-4e0a-93ae-ad7cd8873917 0xc003742967 0xc003742968}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:41:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf0b9356-7995-4e0a-93ae-ad7cd8873917\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 84b949bdfc,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:84b949bdfc test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.7 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037429f0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:41:11.141: INFO: pod: "test-deployment-84b949bdfc-r4m72": &Pod{ObjectMeta:{test-deployment-84b949bdfc-r4m72 test-deployment-84b949bdfc- deployment-6825 81d6378f-97ea-4837-959b-68dca0e995d7 6284 0 2023-01-14 12:41:04 +0000 UTC 2023-01-14 12:41:11 +0000 UTC 0xc003aeb260 map[pod-template-hash:84b949bdfc test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-84b949bdfc c50a9e25-f73e-4b85-869c-f2093bc16007 0xc003aeb297 0xc003aeb298}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c50a9e25-f73e-4b85-869c-f2093bc16007\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lhqnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhqnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.38,StartTime:2023-01-14 12:41:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.7,ImageID:k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c,ContainerID:containerd://b17e2b816e7ba628c5c095c735aa89c7c04ae37d1da7f77967e682bcb5f8171c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:188
Jan 14 12:41:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6825" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":8,"skipped":99,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
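The next spec, "deployment should support proportional scaling", scales a RollingUpdate Deployment while a rollout to a broken image is in flight. A minimal sketch of the Deployment it drives, reconstructed from the object dump below: the name, labels, replica count, maxSurge/maxUnavailable, and httpd image all appear in the log, but treat the exact template as an approximation, not the suite's fixture:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(30)
	maxUnavailable := intstr.FromInt(2)
	maxSurge := intstr.FromInt(3) // the rollout may run up to replicas+3 pods in total

	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2"}},
				},
			},
		},
	}

	out, _ := yaml.Marshal(dep)
	fmt.Print(string(out))
}

The 20/13 assertions in the log below are consistent with proportional scaling arithmetic: with maxSurge=3 the rollout may run 30+3=33 pods in total; at scale-up time the old and new ReplicaSets held 8 and 5 replicas, so the 20 additional replicas are split roughly in that 8:5 proportion (12 and 8), giving 8+12=20 and 5+8=13.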
S
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:11.183: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:91
[It] deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:41:11.271: INFO: Creating deployment "webserver-deployment"
Jan 14 12:41:11.282: INFO: Waiting for observed generation 1
Jan 14 12:41:13.310: INFO: Waiting for all required pods to come up
Jan 14 12:41:13.333: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 14 12:41:15.421: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 14 12:41:15.433: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 14 12:41:15.448: INFO: Updating deployment webserver-deployment
Jan 14 12:41:15.448: INFO: Waiting for observed generation 2
Jan 14 12:41:17.460: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 14 12:41:17.465: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 14 12:41:17.469: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 14 12:41:17.485: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 14 12:41:17.485: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 14 12:41:17.491: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 14 12:41:17.500: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 14 12:41:17.500: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 14 12:41:17.516: INFO: Updating deployment webserver-deployment
Jan 14 12:41:17.516: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 14 12:41:17.536: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 14 12:41:17.545: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:84
Jan 14 12:41:19.605: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7497 ba1cbc34-540d-44fc-8d86-ef3cc9c9a4c9 6656 3 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-14 12:41:11 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00401a7f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-14 12:41:17 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-57ccb67bb8" is progressing.,LastUpdateTime:2023-01-14 12:41:18 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 14 12:41:19.629: INFO: New ReplicaSet "webserver-deployment-57ccb67bb8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-57ccb67bb8 deployment-7497 7aa513fe-19ff-4064-9a51-ce3227a43e3f 6652 3 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment webserver-deployment ba1cbc34-540d-44fc-8d86-ef3cc9c9a4c9 0xc002ff72b7 0xc002ff72b8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1cbc34-540d-44fc-8d86-ef3cc9c9a4c9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 57ccb67bb8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ff7368 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:41:19.629: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 14 12:41:19.630: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-55df494869 deployment-7497 55c7e84b-00a3-40cc-91af-100071060c43 6635 3 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ba1cbc34-540d-44fc-8d86-ef3cc9c9a4c9 0xc002ff71c7 0xc002ff71c8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1cbc34-540d-44fc-8d86-ef3cc9c9a4c9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:41:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 55df494869,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ff7258 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:41:19.660: INFO: Pod "webserver-deployment-55df494869-5wvhz" is available: &Pod{ObjectMeta:{webserver-deployment-55df494869-5wvhz webserver-deployment-55df494869- deployment-7497 4b78ef86-94fd-4e70-b7f1-9a050102eee0 6434 0 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401ac00 0xc00401ac01}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qxwrg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxwrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*3
00,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.41,StartTime:2023-01-14 12:41:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://df2bc3ba7b7caa65c800092a1c4752bc6b5199c4c241e9fa4cf99e590d1d2b65,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.661: INFO: Pod "webserver-deployment-55df494869-7n7kl" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-7n7kl webserver-deployment-55df494869- deployment-7497 793c0caa-3f62-4ee8-a0c3-a34681cb3053 6653 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401add0 0xc00401add1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jp4mp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp4mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.661: INFO: Pod "webserver-deployment-55df494869-8gkp8" is available: &Pod{ObjectMeta:{webserver-deployment-55df494869-8gkp8 webserver-deployment-55df494869- deployment-7497 4b8460de-437d-4e4a-b4a9-c52e64e2aaf9 6423 0 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401af80 0xc00401af81}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p8hjm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8hjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.43,StartTime:2023-01-14 12:41:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5bd0bdad6d3cd7f70d52b6b796fc62240a971f137d2b1b48187ad509087de8dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.661: INFO: Pod "webserver-deployment-55df494869-9np2r" is available: &Pod{ObjectMeta:{webserver-deployment-55df494869-9np2r webserver-deployment-55df494869- deployment-7497 bbe8ca45-3523-4e9b-9583-948fdfa25c33 6401 0 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401b150 0xc00401b151}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mlvnr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlvnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.46,StartTime:2023-01-14 12:41:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://450b6657f001fb8d83cd2a52a0c7a7a52a8b002f76acbe06c0086dc538c17210,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.662: INFO: Pod "webserver-deployment-55df494869-bhg52" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-bhg52 webserver-deployment-55df494869- deployment-7497 722622ec-9542-4a6d-9a2a-c8432d5bef9d 6660 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401b320 0xc00401b321}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4pjhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pjhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.663: INFO: Pod "webserver-deployment-55df494869-ddf6v" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-ddf6v webserver-deployment-55df494869- deployment-7497 329147a8-5926-4096-8bac-3ac6339e740a 6629 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401b4d0 0xc00401b4d1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qmfq7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmfq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.663: INFO: Pod "webserver-deployment-55df494869-fkq4c" is available: &Pod{ObjectMeta:{webserver-deployment-55df494869-fkq4c webserver-deployment-55df494869- deployment-7497 4a3b6327-ef7a-4b53-9e74-2796b5b359b7 6420 0 2023-01-14 12:41:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401b680 0xc00401b681}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l7zsc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7zsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.41,StartTime:2023-01-14 12:41:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://14304b380f5330c232a3197c2bf24b3b78d770768b54cd1d5e02742d1ad950ca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.664: INFO: Pod "webserver-deployment-55df494869-hzq2w" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-hzq2w webserver-deployment-55df494869- deployment-7497 06d6ce1b-405e-40ee-b5a0-5c196ad3d43e 6596 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc00401b850 0xc00401b851}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
(remainder of Pod dump: Pending on node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne, hostIP 172.18.0.6, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC)
Jan 14 12:41:19.664: INFO: Pod "webserver-deployment-55df494869-k2wh2" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk, hostIP 172.18.0.4, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.664: INFO: Pod "webserver-deployment-55df494869-kz885" is available: Running on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk, hostIP 172.18.0.4, podIP 192.168.0.45; container httpd Running since 2023-01-14 12:41:13 +0000 UTC, Ready since 12:41:13
Jan 14 12:41:19.665: INFO: Pod "webserver-deployment-55df494869-l45sp" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne, hostIP 172.18.0.6, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.665: INFO: Pod "webserver-deployment-55df494869-ngk46" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c, hostIP 172.18.0.7, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.668: INFO: Pod "webserver-deployment-55df494869-p67wc" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c, hostIP 172.18.0.7, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.669: INFO: Pod "webserver-deployment-55df494869-psfxh" is available: Running on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c, hostIP 172.18.0.7, podIP 192.168.1.40; container httpd Running since 2023-01-14 12:41:13 +0000 UTC, Ready since 12:41:14
Jan 14 12:41:19.670: INFO: Pod "webserver-deployment-55df494869-s9wn4" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3, hostIP 172.18.0.5, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.670: INFO: Pod "webserver-deployment-55df494869-w64mc" is not available: Pending on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk, hostIP 172.18.0.4, no podIP yet; container httpd Waiting: ContainerCreating; started 2023-01-14 12:41:17 +0000 UTC
Jan 14 12:41:19.670: INFO: Pod "webserver-deployment-55df494869-wg685" is available: Running on node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c, hostIP 172.18.0.7, podIP 192.168.1.39; container httpd Running since 2023-01-14 12:41:13 +0000 UTC, Ready since 12:41:13
Jan 14 12:41:19.670: INFO: Pod "webserver-deployment-55df494869-wt7zk" is available: Running on node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3, podIP 192.168.6.42; container httpd Running
(each Pod above runs a single container httpd, image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, QOSClass BestEffort, restartPolicy Always)
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.42,StartTime:2023-01-14 12:41:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:41:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0f9dbd4675677456452c63b7db36f65ee07f1be852b33c9a1c076add60aa2e78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.671: INFO: Pod "webserver-deployment-55df494869-wxqjn" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-wxqjn webserver-deployment-55df494869- deployment-7497 8fadb357-d007-4c86-92b2-38a7f423c4b4 6622 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc003f9cb60 0xc003f9cb61}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4xvml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xvml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.671: INFO: Pod "webserver-deployment-55df494869-xhcxr" is not available: &Pod{ObjectMeta:{webserver-deployment-55df494869-xhcxr webserver-deployment-55df494869- deployment-7497 6314ec1e-1a28-4447-a6c0-c82e659537ac 6692 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 55c7e84b-00a3-40cc-91af-100071060c43 0xc003f9cd10 0xc003f9cd11}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c7e84b-00a3-40cc-91af-100071060c43\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hvpbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvpbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.672: INFO: Pod "webserver-deployment-57ccb67bb8-6z8gl" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-6z8gl webserver-deployment-57ccb67bb8- deployment-7497 32e56397-e52b-4a6a-9708-22b9f4aad0a8 6572 0 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9cec0 0xc003f9cec1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5bkgp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5bkgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.44,StartTime:2023-01-14 12:41:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.672: INFO: Pod "webserver-deployment-57ccb67bb8-9hvl7" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-9hvl7 webserver-deployment-57ccb67bb8- deployment-7497 94402413-7eb5-4354-9b0e-353ca0698b61 6566 0 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9d0c0 0xc003f9d0c1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rnzqb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnzqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.44,StartTime:2023-01-14 12:41:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.679: INFO: Pod "webserver-deployment-57ccb67bb8-f9vd5" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-f9vd5 webserver-deployment-57ccb67bb8- deployment-7497 51775f3a-de5f-4d8f-9d0f-262b32a8adb8 6680 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9d2c0 0xc003f9d2c1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qkghw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.680: INFO: Pod "webserver-deployment-57ccb67bb8-hmnl6" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-hmnl6 webserver-deployment-57ccb67bb8- deployment-7497 bffd71f4-97cf-404f-bc48-23886996250d 6602 0 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9d490 0xc003f9d491}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mk2bb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk2bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.48,StartTime:2023-01-14 12:41:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.681: INFO: Pod "webserver-deployment-57ccb67bb8-lct9t" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-lct9t webserver-deployment-57ccb67bb8- deployment-7497 4ed78c52-ed3c-4a63-9d40-a2829e60019f 6641 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9d690 0xc003f9d691}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-948cs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-948cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.682: INFO: Pod "webserver-deployment-57ccb67bb8-pp8c4" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-pp8c4 webserver-deployment-57ccb67bb8- deployment-7497 f5c56c73-f35d-4064-88d6-e4ee100d54f3 6563 0 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9d890 0xc003f9d891}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8sm9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8sm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.43,StartTime:2023-01-14 12:41:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.683: INFO: Pod "webserver-deployment-57ccb67bb8-sn7hv" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-sn7hv webserver-deployment-57ccb67bb8- deployment-7497 44f56853-4344-4fcb-80f2-1ee062ebccb4 6689 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9da90 0xc003f9da91}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jfbld,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-14 12:41:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.683: INFO: Pod "webserver-deployment-57ccb67bb8-svcmh" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-svcmh webserver-deployment-57ccb67bb8- deployment-7497 331500cc-59dd-4194-8a16-0e4e50b59540 6528 0 2023-01-14 12:41:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9dc60 0xc003f9dc61}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q6bgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6bgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.42,StartTime:2023-01-14 12:41:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.684: INFO: Pod "webserver-deployment-57ccb67bb8-v9wx2" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-v9wx2 webserver-deployment-57ccb67bb8- deployment-7497 cb4d117f-5e1f-4a21-a590-76803ab21f99 6697 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003f9de60 0xc003f9de61}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n4t4z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4t4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.684: INFO: Pod "webserver-deployment-57ccb67bb8-vjlkk" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-vjlkk webserver-deployment-57ccb67bb8- deployment-7497 1facf60a-44b3-481e-bcae-35e65c2b52c0 6691 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003608030 0xc003608031}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gkt2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkt2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-g557ne,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.691: INFO: Pod "webserver-deployment-57ccb67bb8-w6cxp" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-w6cxp webserver-deployment-57ccb67bb8- deployment-7497 e242d134-53bc-4f13-8135-53efd7049dbb 6690 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc003608200 0xc003608201}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9cbtp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cbtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.691: INFO: Pod "webserver-deployment-57ccb67bb8-xlw6l" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-xlw6l webserver-deployment-57ccb67bb8- deployment-7497 5562f01b-8a0a-48b1-8b10-8e9f4f2dc7c1 6650 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc0036083e0 0xc0036083e1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zkvq7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkvq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:41:19.692: INFO: Pod "webserver-deployment-57ccb67bb8-xspq2" is not available: &Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-xspq2 webserver-deployment-57ccb67bb8- deployment-7497 21f86726-c41c-40bb-bcfe-49826440e2ca 6687 0 2023-01-14 12:41:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 7aa513fe-19ff-4064-9a51-ce3227a43e3f 0xc0036085d0 0xc0036085d1}] [] [{kube-controller-manager Update v1 2023-01-14 12:41:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7aa513fe-19ff-4064-9a51-ce3227a43e3f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:41:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sf9w2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sf9w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:41:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-14 12:41:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:188
Jan 14 12:41:19.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7497" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":9,"skipped":100,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
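The webserver pods dumped above never leave Pending by design: the proportional-scaling spec scales a Deployment whose pod template references the image tag webserver:404, which does not resolve on docker.io, so kubelet keeps reporting ErrImagePull/ImagePullBackOff (or ContainerCreating) while the ReplicaSets are scaled. A minimal client-go sketch of summarizing those waiting reasons; the /tmp/kubeconfig path, the deployment-7497 namespace, and the name=httpd label come from the log, everything else is an assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path and namespace are taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("deployment-7497").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                // ErrImagePull / ImagePullBackOff / ContainerCreating all surface
                // as a Waiting state with a Reason, exactly as in the dumps above.
                if w := s.State.Waiting; w != nil {
                    fmt.Printf("%s/%s: %s\n", p.Name, s.Name, w.Reason)
                }
            }
        }
    }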
[BeforeEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:20.037: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jan 14 12:41:20.216: INFO: Waiting up to 5m0s for pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080" in namespace "security-context-6075" to be "Succeeded or Failed"
Jan 14 12:41:20.237: INFO: Pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080": Phase="Pending", Reason="", readiness=false. Elapsed: 20.84858ms
Jan 14 12:41:22.266: INFO: Pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04991427s
Jan 14 12:41:24.272: INFO: Pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056133528s
Jan 14 12:41:26.334: INFO: Pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118289073s
STEP: Saw pod success
Jan 14 12:41:26.334: INFO: Pod "security-context-ad089f28-b390-4d18-abb5-c05d87332080" satisfied condition "Succeeded or Failed"
Jan 14 12:41:26.381: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod security-context-ad089f28-b390-4d18-abb5-c05d87332080 container test-container: <nil>
STEP: delete the pod
Jan 14 12:41:27.311: INFO: Waiting for pod security-context-ad089f28-b390-4d18-abb5-c05d87332080 to disappear
Jan 14 12:41:27.369: INFO: Pod security-context-ad089f28-b390-4d18-abb5-c05d87332080 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:188
Jan 14 12:41:27.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6075" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":146,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
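The spec above builds a one-shot pod whose container sets SecurityContext.RunAsUser and RunAsGroup, then waits for it to exit successfully. A sketch of a pod object of that shape, assuming a busybox image and illustrative 1001/2002 IDs (the conformance test's actual image and values may differ):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // runAsPod sketches the pattern exercised above: a container that runs
    // under an explicit UID/GID set at the container level. Image and IDs
    // are assumptions for illustration.
    func runAsPod() *corev1.Pod {
        uid, gid := int64(1001), int64(2002)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "id -u && id -g"},
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser:  &uid,
                        RunAsGroup: &gid,
                    },
                }},
            },
        }
    }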
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:27.682: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[BeforeEach] Kubectl label
  test/e2e/kubectl/kubectl.go:1334
STEP: creating the pod
Jan 14 12:41:27.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 create -f -'
Jan 14 12:41:29.179: INFO: stderr: ""
Jan 14 12:41:29.180: INFO: stdout: "pod/pause created\n"
Jan 14 12:41:29.180: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 14 12:41:29.180: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6817" to be "running and ready"
Jan 14 12:41:29.197: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.26558ms
Jan 14 12:41:31.203: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023085509s
Jan 14 12:41:33.210: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.030671533s
Jan 14 12:41:33.210: INFO: Pod "pause" satisfied condition "running and ready"
Jan 14 12:41:33.210: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  test/e2e/framework/framework.go:652
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 14 12:41:33.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 label pods pause testing-label=testing-label-value'
Jan 14 12:41:33.382: INFO: stderr: ""
Jan 14 12:41:33.382: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 14 12:41:33.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 get pod pause -L testing-label'
Jan 14 12:41:33.527: INFO: stderr: ""
Jan 14 12:41:33.527: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 14 12:41:33.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 label pods pause testing-label-'
Jan 14 12:41:33.679: INFO: stderr: ""
Jan 14 12:41:33.679: INFO: stdout: "pod/pause unlabeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 14 12:41:33.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 get pod pause -L testing-label'
Jan 14 12:41:33.805: INFO: stderr: ""
Jan 14 12:41:33.805: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
  test/e2e/kubectl/kubectl.go:1340
STEP: using delete to clean up resources
Jan 14 12:41:33.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 delete --grace-period=0 --force -f -'
Jan 14 12:41:33.963: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:41:33.964: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 14 12:41:33.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 get rc,svc -l name=pause --no-headers'
Jan 14 12:41:34.107: INFO: stderr: "No resources found in kubectl-6817 namespace.\n"
Jan 14 12:41:34.107: INFO: stdout: ""
Jan 14 12:41:34.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6817 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 14 12:41:34.243: INFO: stderr: ""
Jan 14 12:41:34.243: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:41:34.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6817" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":11,"skipped":150,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
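The same add/verify/remove label cycle can be driven through the API instead of the kubectl CLI. A hedged client-go sketch using JSON merge patches, where a null value deletes the key; the kubectl-6817 namespace and pause pod name follow the log:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // addAndRemoveLabel mirrors the two kubectl invocations in the log:
    // `kubectl label pods pause testing-label=testing-label-value` and
    // `kubectl label pods pause testing-label-`.
    func addAndRemoveLabel(ctx context.Context, cs kubernetes.Interface) error {
        // Add (or update) the label with a JSON merge patch.
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        if _, err := cs.CoreV1().Pods("kubectl-6817").Patch(
            ctx, "pause", types.MergePatchType, add, metav1.PatchOptions{}); err != nil {
            return err
        }
        // Remove it: a null value deletes the key under merge-patch semantics.
        del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        _, err := cs.CoreV1().Pods("kubectl-6817").Patch(
            ctx, "pause", types.MergePatchType, del, metav1.PatchOptions{})
        return err
    }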
[BeforeEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:34.315: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating secret secrets-2873/secret-test-406c5e32-f155-4a0b-9826-631e6b76a64b
STEP: Creating a pod to test consume secrets
Jan 14 12:41:34.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c" in namespace "secrets-2873" to be "Succeeded or Failed"
Jan 14 12:41:34.392: INFO: Pod "pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.442721ms
Jan 14 12:41:36.399: INFO: Pod "pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016257423s
Jan 14 12:41:38.407: INFO: Pod "pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024521415s
STEP: Saw pod success
Jan 14 12:41:38.407: INFO: Pod "pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c" satisfied condition "Succeeded or Failed"
Jan 14 12:41:38.414: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c container env-test: <nil>
STEP: delete the pod
Jan 14 12:41:38.450: INFO: Waiting for pod pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c to disappear
Jan 14 12:41:38.458: INFO: Pod pod-configmaps-4e3b2afe-f95f-4c36-8fee-2ed3c601c54c no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:188
Jan 14 12:41:38.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2873" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":169,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
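The env-test container above succeeds once it sees the secret's data exposed as an environment variable. A sketch of the pod shape being exercised; the secret name matches the log, while the busybox image, the SECRET_DATA variable, and the data-1 key are assumptions:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // envFromSecretPod sketches consuming a secret key via the environment,
    // the pattern this conformance spec verifies.
    func envFromSecretPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA", // assumed variable name
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "secret-test-406c5e32-f155-4a0b-9826-631e6b76a64b",
                                },
                                Key: "data-1", // assumed key
                            },
                        },
                    }},
                }},
            },
        }
    }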
[BeforeEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:41:38.569: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:79
Jan 14 12:41:38.613: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:652
STEP: Registering the sample API server.
Jan 14 12:41:39.383: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 14 12:41:41.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-bd4454f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 14 12:41:43.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 41, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-bd4454f8c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 14 12:41:45.629: INFO: Waited 131.265626ms for the sample-apiserver to be ready to handle requests.
STEP: Read Status for v1alpha1.wardle.example.com
STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
STEP: List APIServices
Jan 14 12:41:45.791: INFO: Found v1alpha1.wardle.example.com in APIServiceList
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:69
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:188
Jan 14 12:41:46.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1257" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":13,"skipped":199,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
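Registering the sample API server hinges on an APIService object that tells the aggregator where to proxy wardle.example.com requests. A sketch of such an object; the group, version, and the versionPriority of 400 appear in the log, while the service reference, priorities, and TLS setting are assumptions:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    // wardleAPIService sketches the registration the test performs and later
    // patches with {"spec":{"versionPriority": 400}}.
    func wardleAPIService() *apiregv1.APIService {
        port := int32(443)
        return &apiregv1.APIService{
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
            Spec: apiregv1.APIServiceSpec{
                Group:   "wardle.example.com",
                Version: "v1alpha1",
                Service: &apiregv1.ServiceReference{
                    Namespace: "aggregator-1257", // assumed to match the test namespace
                    Name:      "sample-api",      // assumed service name
                    Port:      &port,
                },
                GroupPriorityMinimum:  2000, // assumed
                VersionPriority:       400,
                InsecureSkipTLSVerify: true, // assumed; a CABundle is the alternative
            },
        }
    }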
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:41:46.302: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] test/e2e/framework/framework.go:652 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/framework.go:188 Jan 14 12:42:14.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-168" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":14,"skipped":234,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSS
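Each "Ensuring..." step in the ResourceQuota test above is a poll: the quota controller updates status.used asynchronously, so the test waits for the usage to be captured after the ConfigMap is created and released again after it is deleted. A rough hand-run equivalent (namespace and object names invented for illustration; note that kube-root-ca.crt also counts toward count/configmaps):

    kubectl create namespace quota-demo
    kubectl create quota test-quota --hard=count/configmaps=2 -n quota-demo
    kubectl create configmap test-cm --from-literal=a=b -n quota-demo
    # poll until the controller reflects the new object in status.used
    kubectl get quota test-quota -n quota-demo -o jsonpath='{.status.used}'
    kubectl delete configmap test-cm -n quota-demo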
------------------------------
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:42:14.468: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services test/e2e/network/service.go:758 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] test/e2e/framework/framework.go:652 STEP: creating service in namespace services-209 STEP: creating service affinity-nodeport-transition in namespace services-209 STEP: creating replication controller affinity-nodeport-transition in namespace services-209 I0114 12:42:14.572459 15 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-209, replica count: 3 I0114 12:42:17.624069 15 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 14 12:42:17.642: INFO: Creating new exec pod Jan 14 12:42:20.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:22.980: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:22.980: INFO: stdout: "" Jan 14 12:42:23.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:26.237: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:26.237: INFO: stdout: "" Jan 14 12:42:26.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:29.284: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:29.285: INFO: stdout: "" Jan 14 12:42:29.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:32.253: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80+ \necho hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:32.253: INFO: stdout: "" Jan 14 12:42:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec
execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:35.292: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:35.292: INFO: stdout: "" Jan 14 12:42:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:38.307: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:38.307: INFO: stdout: "" Jan 14 12:42:38.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:41.270: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:41.270: INFO: stdout: "" Jan 14 12:42:41.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:44.281: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:44.281: INFO: stdout: "" Jan 14 12:42:44.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:47.276: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:47.276: INFO: stdout: "" Jan 14 12:42:47.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:50.288: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:50.288: INFO: stdout: "" Jan 14 12:42:50.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:53.297: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:53.297: INFO: stdout: "" Jan 14 12:42:53.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:42:56.177: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:56.177: INFO: stdout: "" Jan 14 12:42:56.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 
2 affinity-nodeport-transition 80' Jan 14 12:42:59.169: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:42:59.169: INFO: stdout: "" Jan 14 12:42:59.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:02.155: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:02.155: INFO: stdout: "" Jan 14 12:43:02.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:05.150: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:05.150: INFO: stdout: "" Jan 14 12:43:05.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:08.133: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:08.133: INFO: stdout: "" Jan 14 12:43:08.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:11.159: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:11.159: INFO: stdout: "" Jan 14 12:43:11.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:14.139: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:14.139: INFO: stdout: "" Jan 14 12:43:14.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:17.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:17.132: INFO: stdout: "" Jan 14 12:43:17.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:20.146: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:20.146: INFO: stdout: "" Jan 14 12:43:20.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:23.129: INFO: 
stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:23.129: INFO: stdout: "" Jan 14 12:43:23.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:26.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:26.132: INFO: stdout: "" Jan 14 12:43:26.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:29.123: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:29.123: INFO: stdout: "" Jan 14 12:43:29.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:32.136: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:32.136: INFO: stdout: "" Jan 14 12:43:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:35.139: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:35.139: INFO: stdout: "" Jan 14 12:43:35.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:38.147: INFO: stderr: "+ + echo hostName\nnc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:38.147: INFO: stdout: "" Jan 14 12:43:38.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:41.167: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:41.167: INFO: stdout: "" Jan 14 12:43:41.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:44.140: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:44.141: INFO: stdout: "" Jan 14 12:43:44.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:47.126: INFO: stderr: "+ + ncecho -v hostName -t -w\n 2 
affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:47.126: INFO: stdout: "" Jan 14 12:43:47.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:50.129: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:50.129: INFO: stdout: "" Jan 14 12:43:50.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:53.131: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:53.131: INFO: stdout: "" Jan 14 12:43:53.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:56.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:56.132: INFO: stdout: "" Jan 14 12:43:56.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:43:59.174: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:43:59.174: INFO: stdout: "" Jan 14 12:43:59.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:02.134: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:02.134: INFO: stdout: "" Jan 14 12:44:02.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:05.124: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:05.124: INFO: stdout: "" Jan 14 12:44:05.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:08.172: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:08.172: INFO: stdout: "" Jan 14 12:44:08.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:11.120: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to 
affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:11.120: INFO: stdout: "" Jan 14 12:44:11.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:14.139: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:14.139: INFO: stdout: "" Jan 14 12:44:14.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:17.144: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:17.144: INFO: stdout: "" Jan 14 12:44:17.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:20.126: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:20.126: INFO: stdout: "" Jan 14 12:44:20.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:23.150: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:23.150: INFO: stdout: "" Jan 14 12:44:23.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-209 exec execpod-affinity42c94 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:25.298: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:25.298: INFO: stdout: "" Jan 14 12:44:25.298: FAIL: Unexpected error: <*errors.errorString | 0xc002b7e4a0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7187d26?, {0x7a36e58, 0xc003719980}, 0xc003bcc500, 0x1) test/e2e/network/service.go:3771 +0x65d k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) test/e2e/network/service.go:3722 k8s.io/kubernetes/test/e2e/network.glob..func25.29() test/e2e/network/service.go:2153 +0x90 k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?) test/e2e/e2e.go:130 +0x686 k8s.io/kubernetes/test/e2e.TestE2E(0x0?) 
test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f040, 0x73bdd00) /usr/local/go/src/testing/testing.go:1439 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1486 +0x35f Jan 14 12:44:25.299: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-209, will wait for the garbage collector to delete the pods Jan 14 12:44:25.385: INFO: Deleting ReplicationController affinity-nodeport-transition took: 12.672168ms Jan 14 12:44:25.486: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.813354ms [AfterEach] [sig-network] Services test/e2e/framework/framework.go:188 Jan 14 12:44:27.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-209" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762
• Failure [133.182 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
test/e2e/framework/framework.go:652
Jan 14 12:44:25.298: Unexpected error: <*errors.errorString | 0xc002b7e4a0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol occurred
test/e2e/network/service.go:3771
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:40:30.811: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services test/e2e/network/service.go:758 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] test/e2e/framework/framework.go:652 STEP: creating service in namespace services-8155 Jan 14 12:40:30.946: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jan 14 12:40:32.953: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jan 14 12:40:32.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 14 12:40:33.270: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jan 14 12:40:33.270: INFO: stdout: "iptables" Jan 14 12:40:33.270: INFO: proxyMode: iptables Jan 14 12:40:33.285: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 14 12:40:33.293: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8155 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8155 I0114 12:40:33.329916 21 runners.go:193]
Created replication controller with name: affinity-nodeport-timeout, namespace: services-8155, replica count: 3 I0114 12:40:36.382003 21 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 14 12:40:36.407: INFO: Creating new exec pod Jan 14 12:40:39.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Jan 14 12:40:39.737: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jan 14 12:40:39.737: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 14 12:40:39.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.99.218 80' Jan 14 12:40:40.003: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.99.218 80\nConnection to 10.137.99.218 80 port [tcp/http] succeeded!\n" Jan 14 12:40:40.003: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 14 12:40:40.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31598' Jan 14 12:40:40.295: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31598\nConnection to 172.18.0.7 31598 port [tcp/*] succeeded!\n" Jan 14 12:40:40.295: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 14 12:40:40.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31598' Jan 14 12:40:40.536: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31598\nConnection to 172.18.0.5 31598 port [tcp/*] succeeded!\n" Jan 14 12:40:40.536: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 14 12:40:40.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31598/ ; done' Jan 14 12:40:40.963: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n" Jan 14 12:40:40.964: INFO: stdout: "\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9\naffinity-nodeport-timeout-pn9b9" Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Received response from host: affinity-nodeport-timeout-pn9b9 Jan 14 12:40:40.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31598/' Jan 14 12:40:41.246: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n" Jan 14 12:40:41.246: INFO: stdout: "affinity-nodeport-timeout-pn9b9" Jan 14 12:41:01.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31598/' Jan 14 12:44:35.144: INFO: rc: 56 Jan 14 12:44:35.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8155 exec execpod-affinitywhnks -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31598/' Jan 14 12:44:35.315: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31598/\n" Jan 14 12:44:35.315: INFO: stdout: "affinity-nodeport-timeout-nfh2b" Jan 14 12:44:35.315: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8155, will wait for the garbage collector to delete the pods Jan 14 12:44:35.396: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.814979ms
Jan 14 12:44:35.497: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.581873ms [AfterEach] [sig-network] Services test/e2e/framework/framework.go:188 Jan 14 12:44:37.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8155" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762
• [SLOW TEST:246.829 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:44:37.662: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:245 [It] should check if kubectl can dry-run update Pods [Conformance] test/e2e/framework/framework.go:652 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 14 12:44:37.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3400 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Jan 14 12:44:37.782: INFO: stderr: "" Jan 14 12:44:37.782: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jan 14 12:44:37.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3400 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' Jan 14 12:44:38.823: INFO: stderr: "" Jan 14 12:44:38.823: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 14 12:44:38.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3400 delete pods e2e-test-httpd-pod' Jan 14 12:44:41.592: INFO: stderr: "" Jan 14 12:44:41.592: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:188 Jan 14 12:44:41.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3400" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":4,"skipped":29,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
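The server-side dry-run check above is easy to reproduce by hand. The pod and image names below mirror the log; the sequence itself is a sketch, not the framework's code. The patch is validated and admitted by the API server but never persisted, so the live pod keeps the httpd image:

    kubectl run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 \
      --labels=run=e2e-test-httpd-pod
    kubectl patch pod e2e-test-httpd-pod --dry-run=server \
      -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}}'
    # still prints the original httpd image:
    kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'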
SSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":252,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 14 12:44:27.655: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services test/e2e/network/service.go:758 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] test/e2e/framework/framework.go:652 STEP: creating service in namespace services-52 STEP: creating service affinity-nodeport-transition in namespace services-52 STEP: creating replication controller affinity-nodeport-transition in namespace services-52 I0114 12:44:27.705423 15 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-52, replica count: 3 I0114 12:44:30.756509 15 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 14 12:44:30.768: INFO: Creating new exec pod Jan 14 12:44:33.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:35.965: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:35.966: INFO: stdout: "" Jan 14 12:44:36.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:39.215: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:39.215: INFO: stdout: "" Jan 14 12:44:39.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:42.144: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:42.145: INFO: stdout: "" Jan 14 12:44:42.966: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:45.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:45.118: INFO: stdout: "" Jan 14 12:44:45.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:48.141: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:48.141: INFO: stdout: "" Jan 14 12:44:48.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:51.156: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:51.156: INFO: stdout: "" Jan 14 12:44:51.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:54.137: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:54.137: INFO: stdout: "" Jan 14 12:44:54.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:44:57.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:44:57.118: INFO: stdout: "" Jan 14 12:44:57.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:00.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:00.118: INFO: stdout: "" Jan 14 12:45:00.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:03.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:03.132: INFO: stdout: "" Jan 14 12:45:03.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:06.116: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:06.116: INFO: stdout: "" Jan 14 12:45:06.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec 
execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:09.152: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:09.152: INFO: stdout: "" Jan 14 12:45:09.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:12.122: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:12.122: INFO: stdout: "" Jan 14 12:45:12.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:15.133: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:15.133: INFO: stdout: "" Jan 14 12:45:15.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:18.115: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:18.115: INFO: stdout: "" Jan 14 12:45:18.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:21.111: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:21.111: INFO: stdout: "" Jan 14 12:45:21.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:24.123: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:24.123: INFO: stdout: "" Jan 14 12:45:24.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:27.119: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:27.119: INFO: stdout: "" Jan 14 12:45:27.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:30.120: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:30.120: INFO: stdout: "" Jan 14 12:45:30.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
affinity-nodeport-transition 80' Jan 14 12:45:33.121: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:33.121: INFO: stdout: "" Jan 14 12:45:33.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:36.126: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:36.126: INFO: stdout: "" Jan 14 12:45:36.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:39.144: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:39.144: INFO: stdout: "" Jan 14 12:45:39.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:42.121: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:42.121: INFO: stdout: "" Jan 14 12:45:42.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:45.128: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:45.128: INFO: stdout: "" Jan 14 12:45:45.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:48.149: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:48.149: INFO: stdout: "" Jan 14 12:45:48.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:51.140: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:51.140: INFO: stdout: "" Jan 14 12:45:51.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:54.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:54.118: INFO: stdout: "" Jan 14 12:45:54.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:45:57.119: INFO: stderr: "+ 
echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:45:57.119: INFO: stdout: "" Jan 14 12:45:57.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:00.120: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:00.120: INFO: stdout: "" Jan 14 12:46:00.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:03.122: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:03.122: INFO: stdout: "" Jan 14 12:46:03.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:06.108: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:06.108: INFO: stdout: "" Jan 14 12:46:06.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:09.148: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:09.148: INFO: stdout: "" Jan 14 12:46:09.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:12.128: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:12.128: INFO: stdout: "" Jan 14 12:46:12.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:15.138: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:15.138: INFO: stdout: "" Jan 14 12:46:15.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:18.118: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:18.118: INFO: stdout: "" Jan 14 12:46:18.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:21.123: INFO: stderr: "+ + ncecho -v -t -w hostName 2\n affinity-nodeport-transition 80\nConnection 
to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:21.123: INFO: stdout: "" Jan 14 12:46:21.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:24.123: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:24.123: INFO: stdout: "" Jan 14 12:46:24.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:27.120: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:27.120: INFO: stdout: "" Jan 14 12:46:27.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:30.119: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:30.119: INFO: stdout: "" Jan 14 12:46:30.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:33.136: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:33.136: INFO: stdout: "" Jan 14 12:46:33.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:36.117: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:36.117: INFO: stdout: "" Jan 14 12:46:36.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-52 exec execpod-affinity9ttvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jan 14 12:46:38.288: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 14 12:46:38.288: INFO: stdout: "" Jan 14 12:46:38.288: FAIL: Unexpected error: <*errors.errorString | 0xc003492500>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7187d26?, {0x7a36e58, 0xc003c0c300}, 0xc003c55180, 0x1) test/e2e/network/service.go:3771 +0x65d k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) test/e2e/network/service.go:3722 k8s.io/kubernetes/test/e2e/network.glob..func25.29() test/e2e/network/service.go:2153 +0x90 k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?) test/e2e/e2e.go:130 +0x686 k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f040, 0x73bdd00) /usr/local/go/src/testing/testing.go:1439 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1486 +0x35f Jan 14 12:46:38.289: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-52, will wait for the garbage collector to delete the pods Jan 14 12:46:38.374: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.143466ms Jan 14 12:46:38.475: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.944655ms [AfterEach] [sig-network] Services test/e2e/framework/framework.go:188 Jan 14 12:46:40.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-52" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762
• Failure [132.974 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
test/e2e/framework/framework.go:652
Jan 14 12:46:38.288: Unexpected error: <*errors.errorString | 0xc003492500>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol occurred
test/e2e/network/service.go:3771
------------------------------
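Note what actually failed in both affinity-nodeport-transition attempts: every nc connect succeeds (stderr prints "succeeded!"), but stdout stays empty, and an empty response is what the probe counts as unreachable; after 2m0s of empty reads it gives up with the error above. A hedged reconstruction of that probe loop, with the namespace, pod, and service names copied from the log (the loop itself is illustrative, not the framework's code):

    # succeed only when the backend writes something back over the connection
    for i in $(seq 1 120); do
      out=$(kubectl --namespace=services-52 exec execpod-affinity9ttvn -- \
        /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80')
      if [ -n "$out" ]; then echo "reachable: $out"; break; fi
      sleep 1
    done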
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f040, 0x73bdd00)
    /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1486 +0x35f
Jan 14 12:46:38.289: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-52, will wait for the garbage collector to delete the pods
Jan 14 12:46:38.374: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.143466ms
Jan 14 12:46:38.475: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.944655ms
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:188
Jan 14 12:46:40.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-52" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
• Failure [132.974 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  test/e2e/framework/framework.go:652
  Jan 14 12:46:38.288: Unexpected error:
      <*errors.errorString | 0xc003492500>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
  occurred
  test/e2e/network/service.go:3771
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:40:09.662: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[It] should create and stop a working application [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating all guestbook components
Jan 14 12:40:09.832: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Jan 14 12:40:09.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:17.640: INFO: stderr: ""
Jan 14 12:40:17.641: INFO: stdout: "service/agnhost-replica created\n"
Jan 14 12:40:17.641: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Jan 14 12:40:17.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:19.403: INFO: stderr: ""
Jan 14 12:40:19.403: INFO: stdout: "service/agnhost-primary created\n"
Jan 14 12:40:19.405: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 14 12:40:19.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:25.216: INFO: stderr: ""
Jan 14 12:40:25.216: INFO: stdout: "service/frontend created\n"
Jan 14 12:40:25.216: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 14 12:40:25.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:26.877: INFO: stderr: ""
Jan 14 12:40:26.878: INFO: stdout: "deployment.apps/frontend created\n"
Jan 14 12:40:26.878: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 14 12:40:26.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:29.215: INFO: stderr: ""
Jan 14 12:40:29.215: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 14 12:40:29.216: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 14 12:40:29.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 create -f -'
Jan 14 12:40:30.016: INFO: stderr: ""
Jan 14 12:40:30.016: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 14 12:40:30.016: INFO: Waiting for all frontend pods to be Running.
Jan 14 12:40:35.070: INFO: Waiting for frontend to serve content.
Jan 14 12:44:08.515: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: <binary /v1 Status object: Failure, message: "error trying to reach service: read tcp 172.18.0.9:56518->192.168.2.34:80: read: connection reset by peer", reason: ServiceUnavailable>
Jan 14 12:44:13.525: INFO: Trying to add a new entry to the guestbook.
Jan 14 12:47:47.651: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: <binary /v1 Status object: Failure, message: "error trying to reach service: read tcp 172.18.0.9:38846->192.168.2.34:80: read: connection reset by peer", reason: ServiceUnavailable>
Jan 14 12:47:52.652: FAIL: Cannot added new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    test/e2e/kubectl/kubectl.go:376 +0x147
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x24d4cd9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0001df380, 0x73bdd00)
    /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1486 +0x35f
STEP: using delete to clean up resources
Jan 14 12:47:52.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:52.748: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:47:52.748: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:47:52.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:52.885: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:47:52.885: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:47:52.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:52.983: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:47:52.983: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:47:52.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:53.064: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:47:53.064: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:47:53.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:53.190: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:47:53.190: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:47:53.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5763 delete --grace-period=0 --force -f -'
Jan 14 12:47:53.351: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:47:53.351: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:47:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5763" for this suite.
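Note: the guestbook probe that failed above goes through the API server's service proxy, which is what produces the Status objects with reason ServiceUnavailable. A minimal manual reproduction of the same path, using the namespace and Service names from this run (the endpoints/pods queries are generic diagnostics, not part of the test itself):

# Reach the frontend Service via the apiserver service proxy, as the e2e framework does.
kubectl --kubeconfig=/tmp/kubeconfig get --raw \
  '/api/v1/namespaces/kubectl-5763/services/frontend/proxy/'

# If that resets, check whether the Service has any healthy backends.
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5763 get endpoints frontend
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5763 get pods -l app=guestbook,tier=frontend -o wide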
• Failure [463.704 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:340
    should create and stop a working application [Conformance] [It]
    test/e2e/framework/framework.go:652
    Jan 14 12:47:52.652: Cannot added new entry in 180 seconds.
    test/e2e/kubectl/kubectl.go:376
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:44:41.618: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:758
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating service in namespace services-2784
STEP: creating service affinity-clusterip-transition in namespace services-2784
STEP: creating replication controller affinity-clusterip-transition in namespace services-2784
I0114 12:44:41.655934 21 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-2784, replica count: 3
I0114 12:44:44.707140 21 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 14 12:44:44.712: INFO: Creating new exec pod
Jan 14 12:44:47.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Jan 14 12:44:47.946: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 14 12:44:47.946: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 14 12:44:47.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.139.5.153 80'
Jan 14 12:44:48.135: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.139.5.153 80\nConnection to 10.139.5.153 80 port [tcp/http] succeeded!\n"
Jan 14 12:44:48.135: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 14 12:44:48.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:45:06.440: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.139.5.153:80/\n"
Jan 14 12:45:06.440: INFO: stdout: "\n\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-z4zn8\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-z4zn8\n\n\n\n\n\n\naffinity-clusterip-transition-z4zn8\naffinity-clusterip-transition-67gnk\n\naffinity-clusterip-transition-67gnk"
Jan 14 12:45:06.440: INFO: Received response from host: (7 responses, matching the stdout above: affinity-clusterip-transition-67gnk x4, affinity-clusterip-transition-z4zn8 x3)
Jan 14 12:45:36.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:45:54.727: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.139.5.153:80/\n"
Jan 14 12:45:54.727: INFO: stdout: "\naffinity-clusterip-transition-67gnk\n\n\n\n\n\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-67gnk\n\n\naffinity-clusterip-transition-67gnk\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-67gnk"
Jan 14 12:45:54.727: INFO: Received response from host: (7 responses, all affinity-clusterip-transition-67gnk)
Jan 14 12:46:06.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:46:20.695: INFO: rc: 28
Jan 14 12:46:20.695: INFO: Failed to get response from 10.139.5.153:80. Retry until timeout
Jan 14 12:46:36.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:46:50.723: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.139.5.153:80/\n"
Jan 14 12:46:50.723: INFO: stdout: "\naffinity-clusterip-transition-z4zn8\n\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-67gnk\n\n\naffinity-clusterip-transition-67gnk\n\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-z4zn8\naffinity-clusterip-transition-67gnk\n\naffinity-clusterip-transition-67gnk\naffinity-clusterip-transition-67gnk"
Jan 14 12:46:50.724: INFO: Received response from host: (9 responses, matching the stdout above: affinity-clusterip-transition-67gnk x7, affinity-clusterip-transition-z4zn8 x2)
Jan 14 12:46:50.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:47:23.039: INFO: rc: 28
Jan 14 12:47:23.039: INFO: Failed to get response from 10.139.5.153:80. Retry until timeout
Jan 14 12:47:53.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:48:25.401: INFO: rc: 28
Jan 14 12:48:25.401: INFO: Failed to get response from 10.139.5.153:80. Retry until timeout
Jan 14 12:48:53.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:49:25.298: INFO: rc: 28
Jan 14 12:49:25.298: INFO: Failed to get response from 10.139.5.153:80. Retry until timeout
Jan 14 12:49:25.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2784 exec execpod-affinitysq5wj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/ ; done'
Jan 14 12:49:57.538: INFO: rc: 28
Jan 14 12:49:57.538: INFO: Failed to get response from 10.139.5.153:80. Retry until timeout
Jan 14 12:49:57.538: INFO: []
Jan 14 12:49:57.539: FAIL: Connection timed out or not enough responses.
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.checkAffinity({0x7a36e58?, 0xc003470f00?}, 0x0?, {0xc000691600?, 0x0?}, 0x0?, 0x1)
    test/e2e/network/service.go:210 +0x225
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x718f1d3?, {0x7a36e58, 0xc003470f00}, 0xc0008c1400, 0x1)
    test/e2e/network/service.go:3786 +0x7c5
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
    test/e2e/network/service.go:3722
k8s.io/kubernetes/test/e2e/network.glob..func25.26()
    test/e2e/network/service.go:2105 +0x90
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x24d4cd9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000503d40, 0x73bdd00)
    /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1486 +0x35f
Jan 14 12:49:57.539: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2784, will wait for the garbage collector to delete the pods
Jan 14 12:49:57.620: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.94458ms
Jan 14 12:49:57.720: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.429866ms
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:188
Jan 14 12:49:59.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2784" for this suite.
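Note: the check that timed out above (checkAffinity) is essentially the curl loop shown in the log, run from the exec pod before and after the test flips Service.spec.sessionAffinity. A minimal sketch of the same two operations, reusing this run's namespace, pod and ClusterIP (the patch payload is illustrative; the test flips affinity through the Go API client rather than kubectl):

# Flip session affinity the way the "transition" variant does.
kubectl --kubeconfig=/tmp/kubeconfig -n services-2784 patch service affinity-clusterip-transition \
  --type=merge -p '{"spec":{"sessionAffinity":"ClientIP"}}'

# Probe the ClusterIP from inside the exec pod; with ClientIP affinity,
# every non-empty response should name the same backend pod.
kubectl --kubeconfig=/tmp/kubeconfig -n services-2784 exec execpod-affinitysq5wj -- \
  /bin/sh -c 'for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.139.5.153:80/; done'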
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
• Failure [318.155 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  test/e2e/framework/framework.go:652
  Jan 14 12:49:57.539: Connection timed out or not enough responses.
  test/e2e/network/service.go:210
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":10,"skipped":181,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:47:53.370: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[It] should create and stop a working application [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating all guestbook components
Jan 14 12:47:53.399: INFO: (agnhost-replica Service -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:53.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:53.677: INFO: stderr: ""
Jan 14 12:47:53.677: INFO: stdout: "service/agnhost-replica created\n"
Jan 14 12:47:53.678: INFO: (agnhost-primary Service -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:53.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:53.893: INFO: stderr: ""
Jan 14 12:47:53.893: INFO: stdout: "service/agnhost-primary created\n"
Jan 14 12:47:53.893: INFO: (frontend Service -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:53.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:54.142: INFO: stderr: ""
Jan 14 12:47:54.142: INFO: stdout: "service/frontend created\n"
Jan 14 12:47:54.142: INFO: (frontend Deployment -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:54.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:54.331: INFO: stderr: ""
Jan 14 12:47:54.331: INFO: stdout: "deployment.apps/frontend created\n"
Jan 14 12:47:54.331: INFO: (agnhost-primary Deployment -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:54.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:54.553: INFO: stderr: ""
Jan 14 12:47:54.553: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 14 12:47:54.553: INFO: (agnhost-replica Deployment -- manifest identical to the kubectl-5763 run above)
Jan 14 12:47:54.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 create -f -'
Jan 14 12:47:54.774: INFO: stderr: ""
Jan 14 12:47:54.775: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 14 12:47:54.775: INFO: Waiting for all frontend pods to be Running.
Jan 14 12:47:59.828: INFO: Waiting for frontend to serve content.
Jan 14 12:47:59.837: INFO: Trying to add a new entry to the guestbook.
Jan 14 12:51:32.930: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: <binary /v1 Status object: Failure, message: "error trying to reach service: read tcp 172.18.0.9:48126->192.168.2.57:80: read: connection reset by peer", reason: ServiceUnavailable>
Jan 14 12:51:37.931: FAIL: Cannot added new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    test/e2e/kubectl/kubectl.go:376 +0x147
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x24d4cd9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0001df380, 0x73bdd00)
    /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1486 +0x35f
STEP: using delete to clean up resources
Jan 14 12:51:37.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.030: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:51:38.030: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:51:38.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.187: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:51:38.187: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:51:38.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.266: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:51:38.267: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:51:38.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.338: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:51:38.338: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:51:38.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.451: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:51:38.451: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:51:38.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6164 delete --grace-period=0 --force -f -'
Jan 14 12:51:38.587: INFO: stderr: (same immediate-deletion warning)
Jan 14 12:51:38.587: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:51:38.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6164" for this suite.
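Note on the cleanup above: with --grace-period=0 --force, kubectl removes the API objects immediately without waiting for the kubelet to confirm termination, which is exactly what the repeated warning says. The test feeds each manifest back via '-f -'; a by-name sketch of the same six deletions would be:

# Force-delete the guestbook workloads and services in one go.
# Objects disappear from the API at once; containers may linger briefly on the nodes.
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-6164 delete --grace-period=0 --force \
  service/agnhost-replica service/agnhost-primary service/frontend \
  deployment.apps/frontend deployment.apps/agnhost-primary deployment.apps/agnhost-replica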
• Failure [225.233 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:340
    should create and stop a working application [Conformance] [It]
    test/e2e/framework/framework.go:652
    Jan 14 12:51:37.931: Cannot added new entry in 180 seconds.
    test/e2e/kubectl/kubectl.go:376
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:49:59.776: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:758
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating service in namespace services-9187
STEP: creating service affinity-clusterip-transition in namespace services-9187
STEP: creating replication controller affinity-clusterip-transition in namespace services-9187
I0114 12:49:59.820413 21 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-9187, replica count: 3
I0114 12:50:02.871334 21 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 14 12:50:02.879: INFO: Creating new exec pod
Jan 14 12:50:05.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Jan 14 12:50:06.049: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 14 12:50:06.049: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 14 12:50:06.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.128.44.107 80'
Jan 14 12:50:06.193: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.44.107 80\nConnection to 10.128.44.107 80 port [tcp/http] succeeded!\n"
Jan 14 12:50:06.193: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 14 12:50:06.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.128.44.107:80/ ; done'
Jan 14 12:50:20.454: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.128.44.107:80/\n"
Jan 14 12:50:20.454: INFO: stdout: "\n\n\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-bgtvj\n\n\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\n\naffinity-clusterip-transition-t8pg4\n\n\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-t8pg4"
Jan 14 12:50:20.454: INFO: Received response from host: (9 responses, matching the stdout above: affinity-clusterip-transition-t8pg4 x5, affinity-clusterip-transition-bgtvj x4)
Jan 14 12:50:50.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.128.44.107:80/ ; done'
Jan 14 12:51:06.759: INFO: rc: 28
Jan 14 12:51:06.759: INFO: Failed to get response from 10.128.44.107:80. Retry until timeout
Jan 14 12:51:20.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.128.44.107:80/ ; done'
Jan 14 12:51:30.684: INFO: rc: 28
Jan 14 12:51:30.684: INFO: Failed to get response from 10.128.44.107:80. Retry until timeout
Jan 14 12:51:50.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.128.44.107:80/ ; done'
Jan 14 12:52:00.688: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.128.44.107:80/\n"
Jan 14 12:52:00.688: INFO: stdout: "\n\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\n\n\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-t8pg4\n\n\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-bgtvj\naffinity-clusterip-transition-t8pg4\naffinity-clusterip-transition-bgtvj"
Jan 14 12:52:00.688: INFO: Received response from host: (11 responses, matching the stdout above: affinity-clusterip-transition-t8pg4 x5, affinity-clusterip-transition-bgtvj x6)
Jan 14 12:52:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9187 exec execpod-affinity5dlzq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.128.44.107:80/ ; done'
Jan 14 12:52:00.971: INFO: stderr: "+ seq 0 15\n" followed by 16 repetitions of "+ echo\n+ curl -q -s --connect-timeout 2 http://10.128.44.107:80/\n"
Jan 14 12:52:00.971: INFO: stdout: "\n" + "affinity-clusterip-transition-bgtvj" x16 (newline-separated)
Jan 14 12:52:00.971: INFO: Received response from host: affinity-clusterip-transition-bgtvj (x16)
Jan 14 12:52:00.972: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9187, will wait for the garbage collector to delete the pods
Jan 14 12:52:01.047: INFO: Deleting ReplicationController affinity-clusterip-transition took: 9.296579ms
Jan 14 12:52:01.147: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.47391ms
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:188
Jan 14 12:52:03.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9187" for this suite.
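Note: this third attempt at the same spec succeeds -- after the affinity flip, all sixteen responses come from affinity-clusterip-transition-bgtvj. While such a run is still in flight, the Service state behind it can be confirmed directly (names from this run; the namespace is torn down immediately afterwards, so this is only illustrative):

# Should print "ClientIP" during the sticky phase of the check.
kubectl --kubeconfig=/tmp/kubeconfig -n services-9187 get service affinity-clusterip-transition \
  -o jsonpath='{.spec.sessionAffinity}'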
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
• [SLOW TEST:123.709 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":39,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:03.509: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name s-test-opt-del-daedaa79-bbbc-4048-9d98-8be528840f98
STEP: Creating secret with name s-test-opt-upd-dde07032-119f-48cb-91d5-bb57ebbb1c86
STEP: Creating the pod
Jan 14 12:52:03.558: INFO: The status of Pod pod-projected-secrets-df691dfb-8433-4635-b9df-6f860adecd7a is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:52:05.563: INFO: The status of Pod pod-projected-secrets-df691dfb-8433-4635-b9df-6f860adecd7a is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-daedaa79-bbbc-4048-9d98-8be528840f98
STEP: Updating secret s-test-opt-upd-dde07032-119f-48cb-91d5-bb57ebbb1c86
STEP: Creating secret with name s-test-opt-create-bf8d9ce2-4c7a-4fa6-a974-b6a8812d7ef4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
Jan 14 12:52:07.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1995" for this suite.
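Note: the projected-secret spec above exercises optional secret sources being deleted, updated and created while the pod keeps running. A minimal pod of the same shape, as a sketch (all names here are illustrative rather than the run's generated ones; `optional: true` is what lets the volume tolerate a missing secret):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # illustrative name
spec:
  containers:
  - name: watcher
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["pause"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: demo-secret          # may be absent, deleted or updated; pod stays Running
          optional: true
EOF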
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:07.660: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:52:07.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4" in namespace "downward-api-8369" to be "Succeeded or Failed"
Jan 14 12:52:07.690: INFO: Pod "downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918267ms
Jan 14 12:52:09.696: INFO: Pod "downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008223871s
Jan 14 12:52:11.701: INFO: Pod "downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013363789s
STEP: Saw pod success
Jan 14 12:52:11.701: INFO: Pod "downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4" satisfied condition "Succeeded or Failed"
Jan 14 12:52:11.705: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4 container client-container: <nil>
STEP: delete the pod
Jan 14 12:52:11.741: INFO: Waiting for pod downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4 to disappear
Jan 14 12:52:11.746: INFO: Pod downwardapi-volume-1b37b620-026b-47cd-8642-85d7b869b9e4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
Jan 14 12:52:11.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8369" for this suite.
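Note: the spec above reads the container's own memory request back out of a downward-API volume. Sketch of the mechanism (illustrative names; resourceFieldRef inside a volume needs an explicit containerName, and the value is written in bytes, so a 32Mi request surfaces as 33554432):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF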
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:11.831: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-map-ad09acba-d4d4-48b7-b753-1a384047c959
STEP: Creating a pod to test consume secrets
Jan 14 12:52:11.871: INFO: Waiting up to 5m0s for pod "pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453" in namespace "secrets-4626" to be "Succeeded or Failed"
Jan 14 12:52:11.877: INFO: Pod "pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109251ms
Jan 14 12:52:13.882: INFO: Pod "pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453": Phase="Running", Reason="", readiness=false. Elapsed: 2.009265481s
Jan 14 12:52:15.885: INFO: Pod "pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012736294s
STEP: Saw pod success
Jan 14 12:52:15.885: INFO: Pod "pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453" satisfied condition "Succeeded or Failed"
Jan 14 12:52:15.888: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453 container secret-volume-test: <nil>
STEP: delete the pod
Jan 14 12:52:15.901: INFO: Waiting for pod pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453 to disappear
Jan 14 12:52:15.904: INFO: Pod pod-secrets-596b123e-516f-45f6-9c91-7b64d3f2c453 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:188
Jan 14 12:52:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4626" for this suite.
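Note: the "with mappings" variant above exposes a secret key under a caller-chosen path instead of the key name itself. Sketch with illustrative names (the run's generated secret name and key layout will differ):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: mapped-secret                # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret
      items:
      - key: data-1
        path: new-path-data-1        # the key is surfaced at this path, not at data-1
EOF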
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":96,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:15.969: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name secret-emptykey-test-1ab75083-64b7-4ea2-ac5d-c5315b7ec32b
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:188
Jan 14 12:52:15.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5549" for this suite.
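Note: the empty-key spec above passes by asserting that the API server rejects the object, since secret data keys must be non-empty and match [-._a-zA-Z0-9]+. The same rejection can be provoked directly (illustrative name; expect a validation error rather than a created object):

# The apiserver should refuse this with a field validation error on the empty data key.
kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo         # illustrative name
data:
  "": dmFsdWU=                       # empty key -> rejected by apiserver validation
EOF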
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":9,"skipped":143,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:16.011: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 14 12:52:19.056: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:188
Jan 14 12:52:19.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-563" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":148,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:19.119: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jan 14 12:52:19.147: INFO: Waiting up to 5m0s for pod "downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b" in namespace "downward-api-1070" to be "Succeeded or Failed"
Jan 14 12:52:19.151: INFO: Pod "downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094245ms
Elapsed: 4.094245ms Jan 14 12:52:21.156: INFO: Pod "downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008592969s Jan 14 12:52:23.159: INFO: Pod "downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012050563s �[1mSTEP�[0m: Saw pod success Jan 14 12:52:23.159: INFO: Pod "downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b" satisfied condition "Succeeded or Failed" Jan 14 12:52:23.162: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 14 12:52:23.176: INFO: Waiting for pod downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b to disappear Jan 14 12:52:23.178: INFO: Pod downward-api-8d34621c-6ee1-46b9-aea6-5b4124a8973b no longer exists [AfterEach] [sig-node] Downward API test/e2e/framework/framework.go:188 Jan 14 12:52:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-1070" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":178,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:52:23.195: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-f1afced6-3ebe-4d63-be10-690fdaa56a15 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 14 12:52:23.226: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9" in namespace "configmap-9919" to be "Succeeded or Failed" Jan 14 12:52:23.229: INFO: Pod "pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.878067ms Jan 14 12:52:25.234: INFO: Pod "pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007454888s Jan 14 12:52:27.239: INFO: Pod "pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012132884s �[1mSTEP�[0m: Saw pod success Jan 14 12:52:27.239: INFO: Pod "pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9" satisfied condition "Succeeded or Failed" Jan 14 12:52:27.242: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 14 12:52:27.257: INFO: Waiting for pod pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9 to disappear Jan 14 12:52:27.259: INFO: Pod pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 Jan 14 12:52:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-9919" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":183,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:52:27.273: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/framework/framework.go:652 Jan 14 12:52:29.308: INFO: Deleting pod "var-expansion-9cc3fc43-3615-4905-a193-e9e05f1f5f1c" in namespace "var-expansion-3388" Jan 14 12:52:29.313: INFO: Wait up to 5m0s for pod "var-expansion-9cc3fc43-3615-4905-a193-e9e05f1f5f1c" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 14 12:52:31.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-3388" for this suite. 
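The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' records above come from a simple phase-poll loop (roughly a 2s interval against a 5m0s budget). A minimal sketch of that pattern, shelling out to kubectl with the kubeconfig, namespace, and pod name taken from the ConfigMap spec above; the helper is illustrative, not the framework's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodSucceededOrFailed polls a pod's phase via kubectl until it reaches
// a terminal phase or the timeout elapses, mirroring the 2s poll / 5m0s
// timeout cadence visible in the log above.
func waitPodSucceededOrFailed(kubeconfig, ns, pod string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--kubeconfig", kubeconfig, "-n", ns,
			"get", "pod", pod, "-o", "jsonpath={.status.phase}").Output()
		if err == nil {
			phase := strings.TrimSpace(string(out))
			if phase == "Succeeded" || phase == "Failed" {
				return phase, nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("pod %s/%s did not reach a terminal phase within %v", ns, pod, timeout)
}

func main() {
	phase, err := waitPodSucceededOrFailed("/tmp/kubeconfig", "configmap-9919",
		"pod-configmaps-c9490e29-efe8-40d0-8571-1ad17af40de9", 5*time.Minute)
	fmt.Println(phase, err)
}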
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:31.337: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-e2387dbb-369d-44d9-92cf-65b435bd6c4a
STEP: Creating a pod to test consume configMaps
Jan 14 12:52:31.369: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a" in namespace "projected-3734" to be "Succeeded or Failed"
Jan 14 12:52:31.372: INFO: Pod "pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.247212ms
Jan 14 12:52:33.378: INFO: Pod "pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a": Phase="Running", Reason="", readiness=false. Elapsed: 2.008854682s
Jan 14 12:52:35.382: INFO: Pod "pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013220312s
STEP: Saw pod success
Jan 14 12:52:35.382: INFO: Pod "pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a" satisfied condition "Succeeded or Failed"
Jan 14 12:52:35.386: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 14 12:52:35.399: INFO: Waiting for pod pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a to disappear
Jan 14 12:52:35.402: INFO: Pod pod-projected-configmaps-c551dd3d-0ccf-461c-be56-8dade17b2d7a no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
Jan 14 12:52:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3734" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":189,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:35.428: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:52:35.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad" in namespace "downward-api-6606" to be "Succeeded or Failed"
Jan 14 12:52:35.463: INFO: Pod "downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.623861ms
Jan 14 12:52:37.466: INFO: Pod "downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007316624s
Jan 14 12:52:39.470: INFO: Pod "downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010872011s
STEP: Saw pod success
Jan 14 12:52:39.470: INFO: Pod "downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad" satisfied condition "Succeeded or Failed"
Jan 14 12:52:39.472: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad container client-container: <nil>
STEP: delete the pod
Jan 14 12:52:39.487: INFO: Waiting for pod downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad to disappear
Jan 14 12:52:39.489: INFO: Pod downwardapi-volume-bce593d7-86d6-4614-aea7-a25e0733b6ad no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:188
Jan 14 12:52:39.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6606" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":200,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":10,"skipped":181,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:51:38.605: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[It] should create and stop a working application [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating all guestbook components
Jan 14 12:51:38.638: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Jan 14 12:51:38.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:38.922: INFO: stderr: ""
Jan 14 12:51:38.922: INFO: stdout: "service/agnhost-replica created\n"
Jan 14 12:51:38.922: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Jan 14 12:51:38.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:39.134: INFO: stderr: ""
Jan 14 12:51:39.134: INFO: stdout: "service/agnhost-primary created\n"
Jan 14 12:51:39.134: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 14 12:51:39.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:39.348: INFO: stderr: ""
Jan 14 12:51:39.348: INFO: stdout: "service/frontend created\n"
Jan 14 12:51:39.348: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 14 12:51:39.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:39.543: INFO: stderr: ""
Jan 14 12:51:39.543: INFO: stdout: "deployment.apps/frontend created\n"
Jan 14 12:51:39.543: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 14 12:51:39.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:39.770: INFO: stderr: ""
Jan 14 12:51:39.771: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 14 12:51:39.771: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 14 12:51:39.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 create -f -'
Jan 14 12:51:39.996: INFO: stderr: ""
Jan 14 12:51:39.996: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 14 12:51:39.996: INFO: Waiting for all frontend pods to be Running.
Jan 14 12:51:45.049: INFO: Waiting for frontend to serve content.
Jan 14 12:51:45.059: INFO: Trying to add a new entry to the guestbook.
Jan 14 12:51:50.070: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 14 12:52:00.080: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 14 12:52:10.091: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 14 12:52:20.103: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 14 12:52:30.115: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 14 12:52:35.127: INFO: Verifying that added entry can be retrieved.
Jan 14 12:52:40.135: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
STEP: using delete to clean up resources
Jan 14 12:52:45.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.234: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.234: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:52:45.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.355: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.355: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:52:45.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.441: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:52:45.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.513: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.513: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:52:45.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.641: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 12:52:45.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-473 delete --grace-period=0 --force -f -'
Jan 14 12:52:45.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 12:52:45.785: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:52:45.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-473" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":11,"skipped":181,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
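Every manifest above is piped to kubectl over stdin ('create -f -'), and teardown uses forced deletion the same way. A minimal sketch of that stdin-piping pattern, reusing the kubeconfig and namespace from the log; the abbreviated frontend Service manifest stands in for the full ones printed above, and the helper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectlStdin pipes a manifest to kubectl via stdin, the same
// "create -f -" / "delete -f -" pattern the Guestbook spec logs above.
func kubectlStdin(manifest string, args ...string) (string, error) {
	base := []string{"--kubeconfig=/tmp/kubeconfig", "--namespace=kubectl-473"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	svc := `apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
`
	if out, err := kubectlStdin(svc, "create", "-f", "-"); err != nil {
		fmt.Println(out, err)
	}
	// Forced teardown mirrors the spec's cleanup; as the warning in the log says,
	// immediate deletion does not wait for the resource to actually terminate.
	out, err := kubectlStdin(svc, "delete", "--grace-period=0", "--force", "-f", "-")
	fmt.Println(out, err)
}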
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:45.851: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:128
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:52:46.749: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:52:49.777: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:52:49.780: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:188
Jan 14 12:52:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6702" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:139
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":12,"skipped":203,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
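The conversion webhook pod deployed above must answer ConversionReview requests, rewriting stored objects to the desired apiVersion. A minimal sketch of such a handler, assuming the apiextensions.k8s.io/v1 ConversionReview wire format; the /crdconvert path, port, and certificate filenames are placeholders, and the trivial apiVersion rewrite stands in for a real conversion:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// review loosely mirrors the apiextensions.k8s.io/v1 ConversionReview
// envelope, keeping the objects themselves as raw maps.
type review struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	Request    *struct {
		UID               string                   `json:"uid"`
		DesiredAPIVersion string                   `json:"desiredAPIVersion"`
		Objects           []map[string]interface{} `json:"objects"`
	} `json:"request,omitempty"`
	Response map[string]interface{} `json:"response,omitempty"`
}

func convert(w http.ResponseWriter, r *http.Request) {
	var in review
	if err := json.NewDecoder(r.Body).Decode(&in); err != nil || in.Request == nil {
		http.Error(w, "bad ConversionReview", http.StatusBadRequest)
		return
	}
	converted := make([]map[string]interface{}, 0, len(in.Request.Objects))
	for _, obj := range in.Request.Objects {
		obj["apiVersion"] = in.Request.DesiredAPIVersion // placeholder conversion
		converted = append(converted, obj)
	}
	in.Response = map[string]interface{}{
		"uid":              in.Request.UID,
		"convertedObjects": converted,
		"result":           map[string]string{"status": "Success"},
	}
	in.Request = nil
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(in)
}

func main() {
	http.HandleFunc("/crdconvert", convert)
	// The test's "Setting up server cert" step provides the TLS material;
	// tls.crt / tls.key here are assumed filenames.
	log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
}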
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:53.049: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating secret with name secret-test-map-ed785e11-bb51-4c0f-81b3-eae7e51049f6
STEP: Creating a pod to test consume secrets
Jan 14 12:52:53.115: INFO: Waiting up to 5m0s for pod "pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9" in namespace "secrets-1786" to be "Succeeded or Failed"
Jan 14 12:52:53.121: INFO: Pod "pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.537488ms
Jan 14 12:52:55.126: INFO: Pod "pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9": Phase="Running", Reason="", readiness=false. Elapsed: 2.010685948s
Jan 14 12:52:57.129: INFO: Pod "pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01420264s
STEP: Saw pod success
Jan 14 12:52:57.129: INFO: Pod "pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9" satisfied condition "Succeeded or Failed"
Jan 14 12:52:57.132: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9 container secret-volume-test: <nil>
STEP: delete the pod
Jan 14 12:52:57.144: INFO: Waiting for pod pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9 to disappear
Jan 14 12:52:57.147: INFO: Pod pod-secrets-43309add-e3f1-4752-8eda-356c2b99d2c9 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:188
Jan 14 12:52:57.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1786" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":207,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
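Inside the pod above, the secret-volume-test container checks the mapped file's content and permission mode. A conceptual sketch of that pod-side check; the mount path, mapped filename, and expected values are assumptions for illustration, not the test's actual values:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed mount point and mapped key name for a secret volume with
	// items/mode set; a real pod spec defines these in the volume source.
	const path = "/etc/secret-volume/new-path-data-1"
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	info, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "stat:", err)
		os.Exit(1)
	}
	// Print what the conformance check would compare: content and file mode.
	fmt.Printf("content=%q mode=%v\n", string(b), info.Mode().Perm())
}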
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:57.231: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test substitution in container's command
Jan 14 12:52:57.257: INFO: Waiting up to 5m0s for pod "var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d" in namespace "var-expansion-7622" to be "Succeeded or Failed"
Jan 14 12:52:57.260: INFO: Pod "var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.850242ms
Jan 14 12:52:59.266: INFO: Pod "var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00910203s
Jan 14 12:53:01.273: INFO: Pod "var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015270527s
STEP: Saw pod success
Jan 14 12:53:01.273: INFO: Pod "var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d" satisfied condition "Succeeded or Failed"
Jan 14 12:53:01.277: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d container dapi-container: <nil>
STEP: delete the pod
Jan 14 12:53:01.297: INFO: Waiting for pod var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d to disappear
Jan 14 12:53:01.300: INFO: Pod var-expansion-84e8666a-4d5c-4afb-8267-dce847f9226d no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 14 12:53:01.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7622" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":264,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
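The [sig-network] Networking spec that follows polls each netserver pod's /hostName endpoint by exec-ing curl inside a host-network helper pod. A minimal sketch of that probe loop, shelling out to kubectl; the kubeconfig, namespace, pod and container names, port, and 46-try budget are taken from the records below, and the helper is illustrative, not the framework's ExecWithOptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeHostName execs curl inside the helper pod to fetch /hostName from a
// netserver pod IP, retrying until it answers or the tries are exhausted.
func probeHostName(podIP string, maxTries int) (string, error) {
	url := fmt.Sprintf("http://%s:8083/hostName", podIP)
	for i := 0; i < maxTries; i++ {
		out, err := exec.Command("kubectl",
			"--kubeconfig", "/tmp/kubeconfig",
			"-n", "pod-network-test-7422",
			"exec", "host-test-container-pod", "-c", "agnhost-container",
			"--", "/bin/sh", "-c",
			"curl -g -q -s --max-time 15 --connect-timeout 1 "+url).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // the pod's hostname, e.g. "netserver-0"
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no response from %s after %d tries", url, maxTries)
}

func main() {
	name, err := probeHostName("192.168.2.7", 46)
	fmt.Println(name, err)
}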
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:39:28.043: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Performing setup for networking test in namespace pod-network-test-7422
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 14 12:39:28.141: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 14 12:39:28.292: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:39:30.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:32.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:34.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:36.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:38.297: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:40.298: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:42.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:44.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:46.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:48.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:39:50.300: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 14 12:39:50.309: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 14 12:39:50.319: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 14 12:39:50.326: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 14 12:39:52.390: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 14 12:39:52.391: INFO: Going to poll 192.168.1.9 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:39:52.395: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.9:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:39:52.395: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:39:52.398: INFO: ExecWithOptions: Clientset creation
Jan 14 12:39:52.398: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.1.9%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:39:52.546: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 14 12:39:52.546: INFO: Going to poll 192.168.0.12 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:39:52.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.12:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:39:52.552: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:39:52.553: INFO: ExecWithOptions: Clientset creation
Jan 14 12:39:52.553: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.0.12%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:39:52.682: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 14 12:39:52.682: INFO: Going to poll 192.168.6.12 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:39:52.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.12:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:39:52.691: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:39:52.693: INFO: ExecWithOptions: Clientset creation
Jan 14 12:39:52.693: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.6.12%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:39:52.831: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 14 12:39:52.831: INFO: Going to poll 192.168.2.7 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:39:52.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:39:52.837: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:39:52.838: INFO: ExecWithOptions: Clientset creation
Jan 14 12:39:52.838: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:40:08.064: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jan 14 12:40:08.064: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
[the identical ExecWithOptions/curl poll against http://192.168.2.7:8083/hostName repeated 23 more times between 12:40:10.134 and 12:46:44.554, each exec of the same command terminating with exit code 1, stdout: "", stderr: "", followed by: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])]
Jan 14 12:46:46.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:46:46.559: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:46:46.560: INFO: ExecWithOptions: Clientset creation
Jan 14 12:46:46.560: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:47:01.666: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:47:01.666: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:47:03.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:47:03.671: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:47:03.672: INFO: ExecWithOptions: Clientset creation Jan 14 12:47:03.672: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:47:18.764: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:47:18.764: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:47:20.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:47:20.769: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:47:20.770: INFO: ExecWithOptions: Clientset creation Jan 14 12:47:20.770: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:47:35.878: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:47:35.878: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:47:37.882: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:47:37.882: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:47:37.883: INFO: ExecWithOptions: Clientset creation Jan 14 12:47:37.883: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:47:52.952: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:47:52.952: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:47:54.957: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:47:54.957: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:47:54.958: INFO: ExecWithOptions: Clientset creation Jan 14 12:47:54.958: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:48:10.125: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:48:10.125: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:48:12.130: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:48:12.130: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:48:12.131: INFO: ExecWithOptions: Clientset creation Jan 14 12:48:12.131: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:48:27.211: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:48:27.211: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:48:29.216: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:48:29.217: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:48:29.217: INFO: ExecWithOptions: Clientset creation Jan 14 12:48:29.217: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:48:44.294: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:48:44.294: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:48:46.298: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:48:46.298: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:48:46.299: INFO: ExecWithOptions: Clientset creation Jan 14 12:48:46.299: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:49:01.386: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:49:01.386: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:49:03.390: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:49:03.391: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:49:03.391: INFO: ExecWithOptions: Clientset creation Jan 14 12:49:03.391: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:49:18.476: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:49:18.476: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:49:20.481: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:49:20.481: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:49:20.482: INFO: ExecWithOptions: Clientset creation Jan 14 12:49:20.482: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:49:35.560: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:49:35.560: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:49:37.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:49:37.565: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:49:37.565: INFO: ExecWithOptions: Clientset creation Jan 14 12:49:37.565: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:49:52.646: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:49:52.646: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:49:54.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:49:54.651: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:49:54.652: INFO: ExecWithOptions: Clientset creation Jan 14 12:49:54.652: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:50:09.729: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:50:09.729: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:50:11.733: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:50:11.733: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:50:11.734: INFO: ExecWithOptions: Clientset creation Jan 14 12:50:11.734: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:50:26.809: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:50:26.809: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:50:28.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:50:28.813: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:50:28.814: INFO: ExecWithOptions: Clientset creation Jan 14 12:50:28.814: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:50:43.895: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:50:43.896: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:50:45.899: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:50:45.899: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:50:45.900: INFO: ExecWithOptions: Clientset creation Jan 14 12:50:45.900: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:51:00.979: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:51:00.979: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:51:02.984: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:51:02.984: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:51:02.984: INFO: ExecWithOptions: Clientset creation Jan 14 12:51:02.985: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:51:18.078: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:51:18.078: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:51:20.083: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:51:20.083: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:51:20.084: INFO: ExecWithOptions: Clientset creation Jan 14 12:51:20.084: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:51:35.152: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:51:35.152: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:51:37.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:51:37.156: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:51:37.157: INFO: ExecWithOptions: Clientset creation Jan 14 12:51:37.157: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:51:52.237: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:51:52.237: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:51:54.242: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:51:54.242: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:51:54.242: INFO: ExecWithOptions: Clientset creation Jan 14 12:51:54.242: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:52:09.335: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:52:09.335: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:52:11.339: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:52:11.339: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:52:11.340: INFO: ExecWithOptions: Clientset creation Jan 14 12:52:11.340: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:52:26.427: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:52:26.427: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:52:28.431: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:52:28.431: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:52:28.432: INFO: ExecWithOptions: Clientset creation Jan 14 12:52:28.432: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:52:43.506: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 14 12:52:43.506: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[]) Jan 14 12:52:45.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7422 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 14 12:52:45.510: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 14 12:52:45.511: INFO: ExecWithOptions: Clientset creation Jan 14 12:52:45.511: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7422/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.7%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 14 12:53:00.645: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
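For reference, the probe that keeps failing above is equivalent to running the following by hand from the same host-network test pod (an illustrative command, not part of this log; all values are taken from the ExecWithOptions records above):

  kubectl --kubeconfig /tmp/kubeconfig -n pod-network-test-7422 \
    exec host-test-container-pod -c agnhost-container -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName | grep -v '^\s*$'"

Note that the reported "exit code 1" is the trailing grep's status: with curl silenced by -s, a failed connection produces no output, grep matches nothing and exits 1, and curl's own exit status is masked by the pipe. Exit 1 with empty stdout/stderr therefore effectively means curl got no response at all from 192.168.2.7:8083.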
http://192.168.2.7:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jan 14 12:53:00.645: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
Jan 14 12:53:02.646: INFO: Output of kubectl describe pod pod-network-test-7422/netserver-0:
Jan 14 12:53:02.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7422 describe pod netserver-0 --namespace=pod-network-test-7422'
Jan 14 12:53:02.861: INFO: stderr: ""
Jan 14 12:53:02.861: INFO: stdout:
Name:         netserver-0
Namespace:    pod-network-test-7422
Priority:     0
Node:         k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c/172.18.0.7
Start Time:   Sat, 14 Jan 2023 12:39:28 +0000
Labels:       selector-915e579d-aa1f-4028-ac2f-b047214980c4=true
Annotations:  <none>
Status:       Running
IP:           192.168.1.9
IPs:
  IP:  192.168.1.9
Containers:
  webserver:
    Container ID:  containerd://f9c20b2f86da2ff647e9da95d1d0359d0fdc3fad03da6d124447abcf70db9810
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports:         8083/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8083
      --udp-port=8081
    State:          Running
      Started:      Sat, 14 Jan 2023 12:39:29 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j8tlh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-j8tlh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7422/netserver-0 to k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver
Jan 14 12:53:02.861: INFO: Output of kubectl describe pod pod-network-test-7422/netserver-1:
Jan 14 12:53:02.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7422 describe pod netserver-1 --namespace=pod-network-test-7422'
Jan 14 12:53:03.033: INFO: stderr: ""
Jan 14 12:53:03.034: INFO: stdout:
Name:         netserver-1
Namespace:    pod-network-test-7422
Priority:     0
Node:         k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk/172.18.0.4
Start Time:   Sat, 14 Jan 2023 12:39:28 +0000
Labels:       selector-915e579d-aa1f-4028-ac2f-b047214980c4=true
Annotations:  <none>
Status:       Running
IP:           192.168.0.12
IPs:
  IP:  192.168.0.12
Containers:
  webserver:
    Container ID:  containerd://1e6fe079790088c6c9c1be29894ec6df4916b3ea4325c066dfea3bd1bcb554d4
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports:         8083/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8083
      --udp-port=8081
    State:          Running
      Started:      Sat, 14 Jan 2023 12:39:29 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5rtp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-z5rtp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7422/netserver-1 to k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver
Jan 14 12:53:03.034: INFO: Output of kubectl describe pod pod-network-test-7422/netserver-2:
Jan 14 12:53:03.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7422 describe pod netserver-2 --namespace=pod-network-test-7422'
Jan 14 12:53:03.177: INFO: stderr: ""
Jan 14 12:53:03.177: INFO: stdout:
Name:         netserver-2
Namespace:    pod-network-test-7422
Priority:     0
Node:         k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3/172.18.0.5
Start Time:   Sat, 14 Jan 2023 12:39:28 +0000
Labels:       selector-915e579d-aa1f-4028-ac2f-b047214980c4=true
Annotations:  <none>
Status:       Running
IP:           192.168.6.12
IPs:
  IP:  192.168.6.12
Containers:
  webserver:
    Container ID:  containerd://d68a705c1e7ed3c60a283ab78bd85fccab26f8ae2d35866376251c648c3110db
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports:         8083/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8083
      --udp-port=8081
    State:          Running
      Started:      Sat, 14 Jan 2023 12:39:29 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp7cx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-hp7cx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7422/netserver-2 to k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver
Jan 14 12:53:03.177: INFO: Output of kubectl describe pod pod-network-test-7422/netserver-3:
Jan 14 12:53:03.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7422 describe pod netserver-3 --namespace=pod-network-test-7422'
Jan 14 12:53:03.332: INFO: stderr: ""
Jan 14 12:53:03.332: INFO: stdout:
Name:         netserver-3
Namespace:    pod-network-test-7422
Priority:     0
Node:         k8s-upgrade-and-conformance-ihjwwi-worker-g557ne/172.18.0.6
Start Time:   Sat, 14 Jan 2023 12:39:28 +0000
Labels:       selector-915e579d-aa1f-4028-ac2f-b047214980c4=true
Annotations:  <none>
Status:       Running
IP:           192.168.2.7
IPs:
  IP:  192.168.2.7
Containers:
  webserver:
    Container ID:  containerd://aaa2bc37322b95e04fe23b498b419a00f8599a2ec8b6a9e96dadc289c4622270
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports:         8083/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8083
      --udp-port=8081
    State:          Running
      Started:      Sat, 14 Jan 2023 12:39:30 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l8727 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-l8727:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=k8s-upgrade-and-conformance-ihjwwi-worker-g557ne
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7422/netserver-3 to k8s-upgrade-and-conformance-ihjwwi-worker-g557ne
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver
Jan 14 12:53:03.332: FAIL: Error dialing HTTP node to pod
failed to find expected endpoints, tries 46
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName
retrieved map[]
expected map[netserver-3:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25634d7?)
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x24d4cd9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000d6e4e0, 0x73bdd00)
    /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1486 +0x35f
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:188
Jan 14 12:53:03.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7422" for this suite.
• Failure [815.309 seconds]
[sig-network] Networking
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
    test/e2e/framework/framework.go:652

    Jan 14 12:53:03.332: Error dialing HTTP node to pod
    failed to find expected endpoints, tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.7:8083/hostName
    retrieved map[]
    expected map[netserver-3:{}]

    vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:01.338: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not conflict [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:53:01.401: INFO: The status of Pod pod-secrets-4a2f3a34-1272-48d5-9c67-6d9cd157d2de is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:53:03.406: INFO: The status of Pod pod-secrets-4a2f3a34-1272-48d5-9c67-6d9cd157d2de is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:188
Jan 14 12:53:03.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-930" for this suite.
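The failed spec above never got a single HTTP response from netserver-3 (192.168.2.7, on node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne) in 46 tries, even though its kubelet probes report Ready, while the re-run of the same spec below passes on the first attempt; this pattern suggests transient, node-local pod-network breakage during the upgrade. Illustrative triage commands one might run against the workload cluster while the spec is failing (not part of this log):

  # Everything scheduled on the suspect node, including its CNI and kube-proxy pods:
  kubectl --kubeconfig /tmp/kubeconfig get pods -A -o wide \
    --field-selector spec.nodeName=k8s-upgrade-and-conformance-ihjwwi-worker-g557ne

  # Node readiness and kubelet/runtime versions; mixed versions are expected mid-upgrade:
  kubectl --kubeconfig /tmp/kubeconfig get nodes -o wide

  # The same probe from a pod on a different node, to separate "netserver-3 is unhealthy"
  # from "node-to-pod routing to 192.168.2.7 is broken" (the agnhost image ships curl):
  kubectl --kubeconfig /tmp/kubeconfig -n pod-network-test-7422 \
    exec netserver-0 -c webserver -- curl -s --max-time 15 http://192.168.2.7:8083/hostName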
•
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":497,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:03.362: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Performing setup for networking test in namespace pod-network-test-1614
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 14 12:53:03.400: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 14 12:53:03.489: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:53:05.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:53:07.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:09.493: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:11.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:13.494: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:15.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:17.494: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:19.497: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:21.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:23.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 14 12:53:25.495: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 14 12:53:25.505: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 14 12:53:25.513: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 14 12:53:25.524: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 14 12:53:27.570: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 14 12:53:27.570: INFO: Going to poll 192.168.1.60 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:53:27.575: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.60:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1614 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:53:27.575: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:53:27.576: INFO: ExecWithOptions: Clientset creation
Jan 14 12:53:27.576: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1614/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.1.60%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:53:27.701: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 14 12:53:27.701: INFO: Going to poll 192.168.0.61 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:53:27.711: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.61:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1614 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:53:27.711: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:53:27.713: INFO: ExecWithOptions: Clientset creation
Jan 14 12:53:27.713: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1614/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.0.61%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:53:27.835: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 14 12:53:27.836: INFO: Going to poll 192.168.6.64 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:53:27.841: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1614 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:53:27.841: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:53:27.842: INFO: ExecWithOptions: Clientset creation
Jan 14 12:53:27.842: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1614/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.6.64%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:53:27.969: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 14 12:53:27.969: INFO: Going to poll 192.168.2.68 on port 8083 at least 0 times, with a maximum of 46 tries before failing
Jan 14 12:53:27.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.68:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1614 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:53:27.974: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:53:27.976: INFO: ExecWithOptions: Clientset creation
Jan 14 12:53:27.976: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1614/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.68%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jan 14 12:53:28.103: INFO: Found all 1 expected endpoints: [netserver-3] [AfterEach] [sig-network] Networking test/e2e/framework/framework.go:188 Jan 14 12:53:28.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-1614" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":497,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:53:28.142: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-map-ea1b5d28-41e1-4e89-81c2-eb99b36e8e1b �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 14 12:53:28.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2" in namespace "configmap-3579" to be "Succeeded or Failed" Jan 14 12:53:28.198: INFO: Pod "pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.338454ms Jan 14 12:53:30.204: INFO: Pod "pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011225992s Jan 14 12:53:32.210: INFO: Pod "pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017974434s �[1mSTEP�[0m: Saw pod success Jan 14 12:53:32.210: INFO: Pod "pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2" satisfied condition "Succeeded or Failed" Jan 14 12:53:32.215: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 14 12:53:32.239: INFO: Waiting for pod pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2 to disappear Jan 14 12:53:32.245: INFO: Pod pod-configmaps-85c08c22-ebb1-4647-bb34-a0dfeb12e1e2 no longer exists [AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188 Jan 14 12:53:32.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-3579" for this suite. 
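Stepping back to the Networking Granular Checks spec above: each poll is simply curl run inside the host-network test pod via kubectl exec. Reproduced as a standalone command using the names and address from this run (the empty-line grep filter from the log is omitted for brevity):

kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-1614 \
  exec host-test-container-pod -c agnhost-container -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.60:8083/hostName
# Expected output: the name of the pod serving that IP, here netserver-0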
• ------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":505,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:52:39.528: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:61
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
[AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:188
Jan 14 12:53:39.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3651" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":223,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] server version test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:39.613: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance] test/e2e/framework/framework.go:652
STEP: Request ServerVersion
STEP: Confirm major version
Jan 14 12:53:39.649: INFO: Major version: 1
STEP: Confirm minor version
Jan 14 12:53:39.649: INFO: cleanMinorVersion: 24
Jan 14 12:53:39.649: INFO: Minor version: 24
[AfterEach] [sig-api-machinery] server version test/e2e/framework/framework.go:188
Jan 14 12:53:39.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7818" for this suite.
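The server-version spec above only asserts that the apiserver reports the expected major/minor after the upgrade. The same check can be made by hand against the workload cluster (any kubeconfig pointing at it works):

kubectl --kubeconfig=/tmp/kubeconfig get --raw /version
# For this job the response should contain "major": "1", "minor": "24"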
• ------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":17,"skipped":236,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:39.685: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 14 12:53:39.729: INFO: Waiting up to 5m0s for pod "pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b" in namespace "emptydir-5415" to be "Succeeded or Failed"
Jan 14 12:53:39.735: INFO: Pod "pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178755ms
Jan 14 12:53:41.742: INFO: Pod "pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013524986s
Jan 14 12:53:43.748: INFO: Pod "pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019451797s
STEP: Saw pod success
Jan 14 12:53:43.748: INFO: Pod "pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b" satisfied condition "Succeeded or Failed"
Jan 14 12:53:43.755: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b container test-container: <nil>
STEP: delete the pod
Jan 14 12:53:43.798: INFO: Waiting for pod pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b to disappear
Jan 14 12:53:43.800: INFO: Pod pod-39263962-f2f5-47cd-9a44-1eaf6fa1fe6b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188
Jan 14 12:53:43.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5415" for this suite.
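The emptyDir spec above creates a pod that mounts a default-medium emptyDir and verifies the mount's mode. A hand-rolled sketch of the same idea (busybox stands in for the test's agnhost image; pod and volume names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /mnt/volume"]   # print the mount's mode bits
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}   # default medium, i.e. node-local disk
EOF
kubectl logs emptydir-mode-check   # the in-tree test expects a world-accessible (0777) directory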
• ------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":244,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:37
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:43.859: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/common/node/sysctl.go:67
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] test/e2e/framework/framework.go:188
Jan 14 12:53:43.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8876" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":19,"skipped":261,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:43.962: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:53:44.363: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:53:47.401: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance] test/e2e/framework/framework.go:652
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/framework.go:188
Jan 14 12:53:47.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-352" for this suite.
STEP: Destroying namespace "webhook-352-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:104
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":20,"skipped":278,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:47.841: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 14 12:53:47.889: INFO: Waiting up to 5m0s for pod "pod-54dcd65c-84b6-4e95-924c-bffe00832061" in namespace "emptydir-1965" to be "Succeeded or Failed"
Jan 14 12:53:47.898: INFO: Pod "pod-54dcd65c-84b6-4e95-924c-bffe00832061": Phase="Pending", Reason="", readiness=false. Elapsed: 8.931512ms
Jan 14 12:53:49.905: INFO: Pod "pod-54dcd65c-84b6-4e95-924c-bffe00832061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016613741s
Jan 14 12:53:51.912: INFO: Pod "pod-54dcd65c-84b6-4e95-924c-bffe00832061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023590278s
STEP: Saw pod success
Jan 14 12:53:51.912: INFO: Pod "pod-54dcd65c-84b6-4e95-924c-bffe00832061" satisfied condition "Succeeded or Failed"
Jan 14 12:53:51.917: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-54dcd65c-84b6-4e95-924c-bffe00832061 container test-container: <nil>
STEP: delete the pod
Jan 14 12:53:51.939: INFO: Waiting for pod pod-54dcd65c-84b6-4e95-924c-bffe00832061 to disappear
Jan 14 12:53:51.944: INFO: Pod pod-54dcd65c-84b6-4e95-924c-bffe00832061 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/framework.go:188
Jan 14 12:53:51.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1965" for this suite.
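The admission-webhook spec just above the EmptyDir one registers mutating webhooks carrying a shared label, lists them, then deletes them as a collection. With kubectl the equivalent list and delete-collection calls look like this (the label key/value pair is illustrative; the real test derives it from the test namespace):

kubectl get mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
kubectl delete mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
# After the collection is deleted, newly created ConfigMaps are no longer mutated.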
• ------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":305,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:51.980: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-upd-d583d61b-538f-444d-880c-26f09c6dbb96
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap test/e2e/framework/framework.go:188
Jan 14 12:53:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4566" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":311,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:56.193: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-map-b54edb9d-b7ec-4067-a68c-940489f4f657
STEP: Creating a pod to test consume configMaps
Jan 14 12:53:56.256: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83" in namespace "projected-2330" to be "Succeeded or Failed"
Jan 14 12:53:56.264: INFO: Pod "pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83": Phase="Pending", Reason="", readiness=false. Elapsed: 7.494942ms
Jan 14 12:53:58.272: INFO: Pod "pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015558463s
Jan 14 12:54:00.279: INFO: Pod "pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02317261s
STEP: Saw pod success
Jan 14 12:54:00.279: INFO: Pod "pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83" satisfied condition "Succeeded or Failed"
Jan 14 12:54:00.284: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83 container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:54:00.310: INFO: Waiting for pod pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83 to disappear
Jan 14 12:54:00.314: INFO: Pod pod-projected-configmaps-22ada580-867c-43c3-a14a-c6207c185f83 no longer exists
[AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:188
Jan 14 12:54:00.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2330" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":343,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:32.287: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-3318
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/framework/framework.go:652
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3318
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3318
Jan 14 12:53:32.349: INFO: Found 0 stateful pods, waiting for 1
Jan 14 12:53:42.358: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 14 12:53:42.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:53:42.680: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:53:42.680: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:53:42.680: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:53:42.690: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 14 12:53:52.697: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:53:52.697: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:53:52.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999956s
Jan 14 12:53:53.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99175297s
Jan 14 12:53:54.758: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982939939s
Jan 14 12:53:55.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.97484836s
Jan 14 12:53:56.773: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.967176025s
Jan 14 12:53:57.779: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.959686909s
Jan 14 12:53:58.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.952418872s
Jan 14 12:53:59.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941594318s
Jan 14 12:54:00.809: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.934096779s
Jan 14 12:54:01.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.575252ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3318
Jan 14 12:54:02.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:54:03.178: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 14 12:54:03.178: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:54:03.178: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:54:03.186: INFO: Found 1 stateful pods, waiting for 3
Jan 14 12:54:13.194: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 14 12:54:13.194: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 14 12:54:13.194: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 14 12:54:13.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:54:13.483: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:54:13.483: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:54:13.483: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:54:13.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:54:13.814: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:54:13.814: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:54:13.814: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:54:13.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 14 12:54:14.110: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 14 12:54:14.110: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 14 12:54:14.110: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 14 12:54:14.110: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:54:14.117: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 14 12:54:24.129: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:54:24.129: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:54:24.129: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 14 12:54:24.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999649s
Jan 14 12:54:25.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992537935s
Jan 14 12:54:26.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981556954s
Jan 14 12:54:27.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975041032s
Jan 14 12:54:28.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967385318s
Jan 14 12:54:29.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959520755s
Jan 14 12:54:30.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95184462s
Jan 14 12:54:31.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.945765656s
Jan 14 12:54:32.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938001429s
Jan 14 12:54:33.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.293846ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3318
Jan 14 12:54:34.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:54:34.501: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 14 12:54:34.501: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:54:34.501: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:54:34.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:54:34.800: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 14 12:54:34.800: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:54:34.800: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:54:34.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3318 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 14 12:54:35.066: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 14 12:54:35.066: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 14 12:54:35.066: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 14 12:54:35.066: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122
Jan 14 12:54:45.099: INFO: Deleting all statefulset in ns statefulset-3318
Jan 14 12:54:45.105: INFO: Scaling statefulset ss to 0
Jan 14 12:54:45.124: INFO: Waiting for statefulset status.replicas updated to 0
Jan 14 12:54:45.128: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:188
Jan 14 12:54:45.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3318" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":22,"skipped":515,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:00.460: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:61
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating pod busybox-e4e4d6a4-fd45-4910-9477-b0d17d28069c in namespace container-probe-7034
Jan 14 12:54:02.526: INFO: Started pod busybox-e4e4d6a4-fd45-4910-9477-b0d17d28069c in namespace container-probe-7034
STEP: checking the pod's current state and verifying that restartCount is present
Jan 14 12:54:02.531: INFO: Initial restart count of pod busybox-e4e4d6a4-fd45-4910-9477-b0d17d28069c is 0
Jan 14 12:54:52.722: INFO: Restart count of pod container-probe-7034/busybox-e4e4d6a4-fd45-4910-9477-b0d17d28069c is now 1 (50.190403002s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:188
Jan 14 12:54:52.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7034" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":396,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:52.887: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/framework/framework.go:652
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jan 14 12:54:54.096: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-ihjwwi-wgnbq-d4k98 is Running (Ready = true)
Jan 14 12:54:54.296: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188
Jan 14 12:54:54.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9991" for this suite.
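The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and verifies the ReplicaSet survives. kubectl exposes the same policy as --cascade=orphan (deployment name and image here are illustrative):

kubectl create deployment gc-demo --image=registry.k8s.io/pause:3.7
kubectl delete deployment gc-demo --cascade=orphan   # orphan dependents instead of cascading
kubectl get replicasets -l app=gc-demo               # the orphaned RS should still be listed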
• ------------------------------
[BeforeEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:45.214: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController test/e2e/apps/disruption.go:71
[It] should block an eviction until the PDB is updated to allow it [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Jan 14 12:54:47.278: INFO: pods: 0 < 3
Jan 14 12:54:49.286: INFO: running pods: 1 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observed all healthy pods
STEP: Patching the pdb to disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController test/e2e/framework/framework.go:188
Jan 14 12:54:55.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6324" for this suite.
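The DisruptionController spec above exercises the eviction subresource being refused until the PDB permits a disruption. A rough kubectl sketch of the same lever (names, selector, and counts are illustrative):

kubectl create poddisruptionbudget demo-pdb --selector=foo=bar --min-available=3
# While all 3 matching pods are required, eviction requests are rejected (HTTP 429).
kubectl patch pdb demo-pdb -p '{"spec":{"minAvailable":2}}'   # now one pod may be evicted
kubectl delete pdb demo-pdb                                   # with no PDB, evictions proceed freely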
• ------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":23,"skipped":535,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-instrumentation] Events API test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:55.641: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API test/e2e/instrumentation/events.go:84
[It] should delete a collection of events [Conformance] test/e2e/framework/framework.go:652
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Jan 14 12:54:55.705: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API test/e2e/framework/framework.go:188
Jan 14 12:54:55.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9055" for this suite.
• ------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":24,"skipped":561,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":25,"skipped":441,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:54.322: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191
[It] should support remote command execution over websockets [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
Jan 14 12:54:54.371: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 14 12:54:54.398: INFO: The status of Pod pod-exec-websocket-c9e72591-523b-4c03-8c91-7112c3cf7851 is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:54:56.405: INFO: The status of Pod pod-exec-websocket-c9e72591-523b-4c03-8c91-7112c3cf7851 is Running (Ready = true)
[AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188
Jan 14 12:54:56.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9840" for this suite.
• ------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":441,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:56.571: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance] test/e2e/framework/framework.go:652
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:188
Jan 14 12:54:56.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5250" for this suite.
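The ServiceAccount lifecycle spec above is a plain create/watch/patch/list/delete round trip, which maps one-to-one onto kubectl (the name and label are illustrative):

kubectl create serviceaccount demo-sa
kubectl patch serviceaccount demo-sa -p '{"metadata":{"labels":{"e2e":"patched"}}}'
kubectl get serviceaccounts -l e2e=patched   # find it again by the patched label
kubectl delete serviceaccount demo-sa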
• ------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":27,"skipped":448,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Discovery test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:55.791: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery test/e2e/apimachinery/discovery.go:43
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance] test/e2e/framework/framework.go:652
Jan 14 12:54:56.773: INFO: Checking APIGroup: apiregistration.k8s.io
Jan 14 12:54:56.777: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Jan 14 12:54:56.777: INFO: Versions found [{apiregistration.k8s.io/v1 v1}]
Jan 14 12:54:56.777: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Jan 14 12:54:56.777: INFO: Checking APIGroup: apps
Jan 14 12:54:56.779: INFO: PreferredVersion.GroupVersion: apps/v1
Jan 14 12:54:56.779: INFO: Versions found [{apps/v1 v1}]
Jan 14 12:54:56.779: INFO: apps/v1 matches apps/v1
Jan 14 12:54:56.779: INFO: Checking APIGroup: events.k8s.io
Jan 14 12:54:56.780: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Jan 14 12:54:56.782: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Jan 14 12:54:56.782: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Jan 14 12:54:56.782: INFO: Checking APIGroup: authentication.k8s.io
Jan 14 12:54:56.784: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Jan 14 12:54:56.784: INFO: Versions found [{authentication.k8s.io/v1 v1}]
Jan 14 12:54:56.784: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Jan 14 12:54:56.784: INFO: Checking APIGroup: authorization.k8s.io
Jan 14 12:54:56.786: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Jan 14 12:54:56.786: INFO: Versions found [{authorization.k8s.io/v1 v1}]
Jan 14 12:54:56.786: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Jan 14 12:54:56.786: INFO: Checking APIGroup: autoscaling
Jan 14 12:54:56.792: INFO: PreferredVersion.GroupVersion: autoscaling/v2
Jan 14 12:54:56.792: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Jan 14 12:54:56.792: INFO: autoscaling/v2 matches autoscaling/v2
Jan 14 12:54:56.792: INFO: Checking APIGroup: batch
Jan 14 12:54:56.794: INFO: PreferredVersion.GroupVersion: batch/v1
Jan 14 12:54:56.794: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Jan 14 12:54:56.794: INFO: batch/v1 matches batch/v1
Jan 14 12:54:56.794: INFO: Checking APIGroup: certificates.k8s.io
Jan 14 12:54:56.796: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Jan 14 12:54:56.796: INFO: Versions found [{certificates.k8s.io/v1 v1}]
Jan 14 12:54:56.796: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Jan 14 12:54:56.796: INFO: Checking APIGroup: networking.k8s.io
Jan 14 12:54:56.797: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Jan 14 12:54:56.797: INFO: Versions found [{networking.k8s.io/v1 v1}]
Jan 14 12:54:56.797: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Jan 14 12:54:56.797: INFO: Checking APIGroup: policy
Jan 14 12:54:56.800: INFO: PreferredVersion.GroupVersion: policy/v1
Jan 14 12:54:56.800: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Jan 14 12:54:56.800: INFO: policy/v1 matches policy/v1
Jan 14 12:54:56.800: INFO: Checking APIGroup: rbac.authorization.k8s.io
Jan 14 12:54:56.802: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Jan 14 12:54:56.802: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}]
Jan 14 12:54:56.802: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Jan 14 12:54:56.802: INFO: Checking APIGroup: storage.k8s.io
Jan 14 12:54:56.805: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Jan 14 12:54:56.805: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Jan 14 12:54:56.806: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Jan 14 12:54:56.806: INFO: Checking APIGroup: admissionregistration.k8s.io
Jan 14 12:54:56.808: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Jan 14 12:54:56.808: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}]
Jan 14 12:54:56.808: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Jan 14 12:54:56.808: INFO: Checking APIGroup: apiextensions.k8s.io
Jan 14 12:54:56.812: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Jan 14 12:54:56.812: INFO: Versions found [{apiextensions.k8s.io/v1 v1}]
Jan 14 12:54:56.812: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Jan 14 12:54:56.812: INFO: Checking APIGroup: scheduling.k8s.io
Jan 14 12:54:56.817: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Jan 14 12:54:56.817: INFO: Versions found [{scheduling.k8s.io/v1 v1}]
Jan 14 12:54:56.817: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Jan 14 12:54:56.817: INFO: Checking APIGroup: coordination.k8s.io
Jan 14 12:54:56.818: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Jan 14 12:54:56.818: INFO: Versions found [{coordination.k8s.io/v1 v1}]
Jan 14 12:54:56.818: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Jan 14 12:54:56.818: INFO: Checking APIGroup: node.k8s.io
Jan 14 12:54:56.822: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Jan 14 12:54:56.822: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Jan 14 12:54:56.822: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Jan 14 12:54:56.822: INFO: Checking APIGroup: discovery.k8s.io
Jan 14 12:54:56.823: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Jan 14 12:54:56.823: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Jan 14 12:54:56.823: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Jan 14 12:54:56.823: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Jan 14 12:54:56.828: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2
Jan 14 12:54:56.828: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Jan 14 12:54:56.828: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2
[AfterEach] [sig-api-machinery] Discovery test/e2e/framework/framework.go:188
Jan 14 12:54:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-8166" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":25,"skipped":572,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:56.898: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname only [NodeConformance] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:54:56.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf" in namespace "downward-api-6256" to be "Succeeded or Failed"
Jan 14 12:54:56.945: INFO: Pod "downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276804ms
Jan 14 12:54:58.952: INFO: Pod "downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010989718s
Jan 14 12:55:00.960: INFO: Pod "downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019203257s
STEP: Saw pod success
Jan 14 12:55:00.961: INFO: Pod "downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf" satisfied condition "Succeeded or Failed"
Jan 14 12:55:00.979: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf container client-container: <nil>
STEP: delete the pod
Jan 14 12:55:01.015: INFO: Waiting for pod downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf to disappear
Jan 14 12:55:01.033: INFO: Pod downwardapi-volume-46c73521-1c68-41e2-8394-36b4a64ee9cf no longer exists
[AfterEach] [sig-storage] Downward API volume test/e2e/framework/framework.go:188
Jan 14 12:55:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6256" for this suite.
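The Downward API spec above projects metadata.name into a volume file and reads it back from the container. A minimal sketch under stated assumptions (busybox stands in for the test's image; all names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the field the test asserts on
EOF
kubectl logs downward-podname   # prints the pod's own name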
• ------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":594,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:54:56.736: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount an API token into pods [Conformance] test/e2e/framework/framework.go:652
STEP: reading a file in the container
Jan 14 12:55:00.818: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1115 pod-service-account-a4f00f43-8ce3-4198-8308-98e6295c0694 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 14 12:55:01.199: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1115 pod-service-account-a4f00f43-8ce3-4198-8308-98e6295c0694 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 14 12:55:01.486: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1115 pod-service-account-a4f00f43-8ce3-4198-8308-98e6295c0694 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
Jan 14 12:55:01.852: INFO: Got root ca configmap in namespace "svcaccounts-1115"
[AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:188
Jan 14 12:55:01.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1115" for this suite.
• ------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":28,"skipped":467,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Secrets test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:02.056: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should patch a secret [Conformance] test/e2e/framework/framework.go:652
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets test/e2e/framework/framework.go:188
Jan 14 12:55:02.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1521" for this suite.
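The Secrets spec above patches a label onto a secret and then deletes it via that LabelSelector. The same round trip by hand (names, label, and data are illustrative):

kubectl create secret generic demo-secret --from-literal=key=value
kubectl patch secret demo-secret -p '{"metadata":{"labels":{"testsecret":"true"}}}'
kubectl delete secret -l testsecret=true   # delete using the LabelSelector, as the test does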
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":29,"skipped":508,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:02.262: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 14 12:55:02.355: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 14 12:55:02.370: INFO: starting watch
STEP: patching
STEP: updating
Jan 14 12:55:02.407: INFO: waiting for watch events with expected annotations
Jan 14 12:55:02.407: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:188
Jan 14 12:55:02.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6131" for this suite.
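Note: the CronJob verbs exercised above (create/get/list/watch, patch/update, the /status subresource, delete, delete-collection) can be sketched with kubectl; the name, image, and schedule below are illustrative only:

  kubectl create cronjob demo-cron --image=busybox --schedule='*/1 * * * *' -- date
  kubectl get cronjob demo-cron
  kubectl get cronjobs --all-namespaces                                  # cluster-wide listing
  kubectl patch cronjob demo-cron -p '{"metadata":{"annotations":{"patched":"true"}}}'
  kubectl delete cronjob demo-cron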
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":30,"skipped":532,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:01.087: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating projection with secret that has name projected-secret-test-82fd9ed6-bba6-47b8-ab49-24b562faf205
STEP: Creating a pod to test consume secrets
Jan 14 12:55:01.159: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16" in namespace "projected-4305" to be "Succeeded or Failed"
Jan 14 12:55:01.168: INFO: Pod "pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16": Phase="Pending", Reason="", readiness=false. Elapsed: 9.352154ms
Jan 14 12:55:03.176: INFO: Pod "pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016919563s
Jan 14 12:55:05.183: INFO: Pod "pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023836168s
STEP: Saw pod success
Jan 14 12:55:05.183: INFO: Pod "pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16" satisfied condition "Succeeded or Failed"
Jan 14 12:55:05.188: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 14 12:55:05.218: INFO: Waiting for pod pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16 to disappear
Jan 14 12:55:05.223: INFO: Pod pod-projected-secrets-1e6dd2ad-8c38-43b9-a4d4-1060b0118c16 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:188
Jan 14 12:55:05.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4305" for this suite.
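Note: a minimal sketch of the pod shape this spec builds — a projected secret volume with a restrictive defaultMode, run as a non-root UID with fsGroup set so the group can still read the files. All names, UIDs, and modes below are placeholders:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000          # non-root
      fsGroup: 2000            # files in the volume get this group
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/projected
    volumes:
    - name: secret-volume
      projected:
        defaultMode: 0440      # owner+group read-only
        sources:
        - secret:
            name: demo-secret
  EOF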
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":600,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:05.257: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should replace a pod template [Conformance]
  test/e2e/framework/framework.go:652
STEP: Create a pod template
STEP: Replace a pod template
Jan 14 12:55:05.313: INFO: Found updated podtemplate annotation: "true"
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:188
Jan 14 12:55:05.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3257" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","total":-1,"completed":28,"skipped":606,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:02.559: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:55:02.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 create -f -'
Jan 14 12:55:04.799: INFO: stderr: ""
Jan 14 12:55:04.799: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jan 14 12:55:04.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 create -f -'
Jan 14 12:55:05.420: INFO: stderr: ""
Jan 14 12:55:05.420: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 14 12:55:06.427: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:55:06.427: INFO: Found 0 / 1
Jan 14 12:55:07.428: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:55:07.428: INFO: Found 1 / 1
Jan 14 12:55:07.428: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 14 12:55:07.433: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 14 12:55:07.433: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 14 12:55:07.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 describe pod agnhost-primary-w589m'
Jan 14 12:55:07.643: INFO: stderr: ""
Jan 14 12:55:07.643: INFO: stdout: "Name: agnhost-primary-w589m\nNamespace: kubectl-9330\nPriority: 0\nNode: k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3/172.18.0.5\nStart Time: Sat, 14 Jan 2023 12:55:04 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.67\nIPs:\n IP: 192.168.6.67\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://cc5927f726dc75099f0297f59c3c47067eea4ecbcb0f9e86d109e33a2a0a137b\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 14 Jan 2023 12:55:05 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pg9g (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-2pg9g:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9330/agnhost-primary-w589m to k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n"
Jan 14 12:55:07.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 describe rc agnhost-primary'
Jan 14 12:55:07.828: INFO: stderr: ""
Jan 14 12:55:07.828: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9330\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-w589m\n"
Jan 14 12:55:07.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 describe service agnhost-primary'
Jan 14 12:55:07.999: INFO: stderr: ""
Jan 14 12:55:07.999: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9330\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.138.110.30\nIPs: 10.138.110.30\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.6.67:6379\nSession Affinity: None\nEvents: <none>\n"
Jan 14 12:55:08.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 describe node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c'
Jan 14 12:55:08.263: INFO: stderr: ""
Jan 14 12:55:08.263: INFO: stdout: "Name: k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\n kubernetes.io/os=linux\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-ihjwwi\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-rev1cp\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\n cluster.x-k8s.io/owner-kind: MachineSet\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856\n kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 14 Jan 2023 12:35:02 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\n AcquireTime: <unset>\n RenewTime: Sat, 14 Jan 2023 12:55:05 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 14 Jan 2023 12:52:04 +0000 Sat, 14 Jan 2023 12:35:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 14 Jan 2023 12:52:04 +0000 Sat, 14 Jan 2023 12:35:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 14 Jan 2023 12:52:04 +0000 Sat, 14 Jan 2023 12:35:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 14 Jan 2023 12:52:04 +0000 Sat, 14 Jan 2023 12:35:53 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: e7db28489ec14629a02e997542de31c6\n System UUID: 2b86c222-33c4-47cc-a6f1-7a64d6b7f1cb\n Boot ID: 135ba932-745c-4208-8c73-c3b0f43f0177\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.4\n Kubelet Version: v1.24.9\n Kube-Proxy Version: v1.24.9\nPodCIDR: 192.168.1.0/24\nPodCIDRs: 192.168.1.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-bd6b6df9f-t5mmm 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 17m\n kube-system kindnet-j9tdl 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 20m\n kube-system kube-proxy-9fq77 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m\n services-5166 affinity-nodeport-transition-hltjx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m28s\n services-5166 execpod-affinityqnvbc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m25s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (2%) 100m (1%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 19m kube-proxy \n Normal RegisteredNode 20m node-controller Node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c event: Registered Node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-d9h8c in Controller\n"
Jan 14 12:55:08.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9330 describe namespace kubectl-9330'
Jan 14 12:55:08.419: INFO: stderr: ""
Jan 14 12:55:08.419: INFO: stdout: "Name: kubectl-9330\nLabels: e2e-framework=kubectl\n e2e-run=5223e98c-552c-4e0b-b3f2-d556067b1314\n kubernetes.io/metadata.name=kubectl-9330\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:55:08.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9330" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":31,"skipped":540,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:05.398: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-map-5411a827-3f96-4984-99aa-9edd24eac51f
STEP: Creating a pod to test consume configMaps
Jan 14 12:55:05.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc" in namespace "configmap-9643" to be "Succeeded or Failed"
Jan 14 12:55:05.522: INFO: Pod "pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.168488ms
Jan 14 12:55:07.529: INFO: Pod "pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023911377s
Jan 14 12:55:09.538: INFO: Pod "pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033741803s
STEP: Saw pod success
Jan 14 12:55:09.539: INFO: Pod "pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc" satisfied condition "Succeeded or Failed"
Jan 14 12:55:09.544: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:55:09.582: INFO: Waiting for pod pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc to disappear
Jan 14 12:55:09.587: INFO: Pod pod-configmaps-49f8a6e6-a6d7-44a3-9cb5-9f19520fefdc no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
Jan 14 12:55:09.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9643" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":629,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:09.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/apps/rc.go:56
[It] should test the lifecycle of a ReplicationController [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:188
Jan 14 12:55:13.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9296" for this suite.
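Note: the ReplicationController lifecycle above is the standard verb set, including the scale subresource and delete-by-collection. A minimal kubectl sketch with placeholder names (demo-rc and its manifest are illustrative, not from this run):

  kubectl create -f demo-rc.yaml                                   # a ReplicationController manifest
  kubectl patch rc demo-rc -p '{"metadata":{"labels":{"rc":"patched"}}}'
  kubectl scale rc demo-rc --replicas=2                            # the scale-subresource step
  kubectl get rc --all-namespaces                                  # listing all ReplicationControllers
  kubectl delete rc -l rc=patched                                  # delete by collection/selector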
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":30,"skipped":649,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:08.482: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test service account token:
Jan 14 12:55:08.532: INFO: Waiting up to 5m0s for pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9" in namespace "svcaccounts-6263" to be "Succeeded or Failed"
Jan 14 12:55:08.536: INFO: Pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229104ms
Jan 14 12:55:10.542: INFO: Pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9": Phase="Running", Reason="", readiness=true. Elapsed: 2.010202552s
Jan 14 12:55:12.552: INFO: Pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9": Phase="Running", Reason="", readiness=false. Elapsed: 4.01959588s
Jan 14 12:55:14.558: INFO: Pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026153466s
STEP: Saw pod success
Jan 14 12:55:14.558: INFO: Pod "test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9" satisfied condition "Succeeded or Failed"
Jan 14 12:55:14.563: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9 container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:55:14.588: INFO: Waiting for pod test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9 to disappear
Jan 14 12:55:14.592: INFO: Pod test-pod-da049f5b-316e-4b38-870d-00d0cfcb18b9 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:188
Jan 14 12:55:14.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6263" for this suite.
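Note: unlike the default mount in the earlier ServiceAccounts spec, this one uses a serviceAccountToken projected volume source, which lets the pod request a token with an explicit audience and expiry. A minimal sketch; the name, path, audience, and expiry are placeholders:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-token-demo
  spec:
    restartPolicy: Never
    containers:
    - name: token-reader
      image: busybox
      command: ["sh", "-c", "cat /var/run/secrets/tokens/sa-token"]
      volumeMounts:
      - name: token-vol
        mountPath: /var/run/secrets/tokens
    volumes:
    - name: token-vol
      projected:
        sources:
        - serviceAccountToken:
            path: sa-token
            expirationSeconds: 3600
            audience: demo-audience
  EOF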
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":32,"skipped":556,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:13.234: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward api env vars
Jan 14 12:55:13.290: INFO: Waiting up to 5m0s for pod "downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710" in namespace "downward-api-6981" to be "Succeeded or Failed"
Jan 14 12:55:13.298: INFO: Pod "downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710": Phase="Pending", Reason="", readiness=false. Elapsed: 7.828982ms
Jan 14 12:55:15.320: INFO: Pod "downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029127195s
Jan 14 12:55:17.325: INFO: Pod "downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03444394s
STEP: Saw pod success
Jan 14 12:55:17.325: INFO: Pod "downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710" satisfied condition "Succeeded or Failed"
Jan 14 12:55:17.332: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710 container dapi-container: <nil>
STEP: delete the pod
Jan 14 12:55:17.367: INFO: Waiting for pod downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710 to disappear
Jan 14 12:55:17.370: INFO: Pod downward-api-24d80b6f-9a31-4766-90dd-14f55e6aa710 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:188
Jan 14 12:55:17.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6981" for this suite.
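Note: the downward API env vars checked above come from fieldRef selectors on the pod spec. A minimal sketch of the pod shape (names are placeholders; the three fieldPath values are the ones this spec verifies):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep MY_POD_"]
      env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: MY_POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: MY_POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
  EOF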
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":677,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:14.653: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name configmap-test-volume-cec3d87f-ad9a-4756-b4e9-69222c6d84d8
STEP: Creating a pod to test consume configMaps
Jan 14 12:55:14.703: INFO: Waiting up to 5m0s for pod "pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee" in namespace "configmap-9385" to be "Succeeded or Failed"
Jan 14 12:55:14.709: INFO: Pod "pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.612449ms
Jan 14 12:55:16.716: INFO: Pod "pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012680434s
Jan 14 12:55:18.724: INFO: Pod "pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021208465s
STEP: Saw pod success
Jan 14 12:55:18.725: INFO: Pod "pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee" satisfied condition "Succeeded or Failed"
Jan 14 12:55:18.730: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee container configmap-volume-test: <nil>
STEP: delete the pod
Jan 14 12:55:18.776: INFO: Waiting for pod pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee to disappear
Jan 14 12:55:18.785: INFO: Pod pod-configmaps-8dbfb9da-a7d6-4a66-8a40-7471e2d906ee no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:188
Jan 14 12:55:18.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9385" for this suite.
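Note: a minimal sketch of mounting one ConfigMap at two paths in the same pod, as this spec does (names and mount paths are placeholders):

  kubectl create configmap demo-configmap --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-two-volumes-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/cm-1
      - name: vol-2
        mountPath: /etc/cm-2
    volumes:
    - name: vol-1
      configMap:
        name: demo-configmap
    - name: vol-2
      configMap:
        name: demo-configmap
  EOF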
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":574,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:18.823: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Jan 14 12:55:18.902: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:55:20.910: INFO: The status of Pod test-pod is Running (Ready = true)
STEP: Creating hostNetwork=true pod
Jan 14 12:55:20.930: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 14 12:55:22.937: INFO: The status of Pod test-host-network-pod is Running (Ready = true)
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 14 12:55:22.947: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:22.947: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:22.948: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:22.948: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true)
Jan 14 12:55:23.131: INFO: Exec stderr: ""
Jan 14 12:55:23.131: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.131: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.132: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.132: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true)
Jan 14 12:55:23.246: INFO: Exec stderr: ""
Jan 14 12:55:23.246: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.246: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.247: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.247: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true)
Jan 14 12:55:23.359: INFO: Exec stderr: ""
Jan 14 12:55:23.359: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.359: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.360: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.360: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true)
Jan 14 12:55:23.483: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 14 12:55:23.483: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.483: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.485: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.485: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true)
Jan 14 12:55:23.616: INFO: Exec stderr: ""
Jan 14 12:55:23.616: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.616: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.617: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.617: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true)
Jan 14 12:55:23.776: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 14 12:55:23.777: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.778: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.780: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.780: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true)
Jan 14 12:55:23.979: INFO: Exec stderr: ""
Jan 14 12:55:23.980: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:23.980: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:23.982: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:23.982: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true)
Jan 14 12:55:24.152: INFO: Exec stderr: ""
Jan 14 12:55:24.152: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:24.152: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:24.154: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:24.154: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true)
Jan 14 12:55:24.312: INFO: Exec stderr: ""
Jan 14 12:55:24.312: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6267 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 14 12:55:24.312: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 14 12:55:24.314: INFO: ExecWithOptions: Clientset creation
Jan 14 12:55:24.314: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6267/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true)
Jan 14 12:55:24.449: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:188
Jan 14 12:55:24.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6267" for this suite.
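Note: the rule being verified above is that the kubelet rewrites /etc/hosts for ordinary pods, but not for hostNetwork=true pods and not for a container that mounts its own file over /etc/hosts. A quick way to see the managed variant (pod name is a placeholder):

  kubectl run hosts-demo --image=busybox --restart=Never -- sleep 3600
  kubectl exec hosts-demo -- cat /etc/hosts     # begins with "# Kubernetes-managed hosts file"
  kubectl delete pod hosts-demo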
•
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":576,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:17.520: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 14 12:55:20.097: INFO: Successfully updated pod "adopt-release-7xc4n"
STEP: Checking that the Job readopts the Pod
Jan 14 12:55:20.097: INFO: Waiting up to 15m0s for pod "adopt-release-7xc4n" in namespace "job-6091" to be "adopted"
Jan 14 12:55:20.102: INFO: Pod "adopt-release-7xc4n": Phase="Running", Reason="", readiness=true. Elapsed: 5.407005ms
Jan 14 12:55:22.110: INFO: Pod "adopt-release-7xc4n": Phase="Running", Reason="", readiness=true. Elapsed: 2.012763311s
Jan 14 12:55:22.110: INFO: Pod "adopt-release-7xc4n" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 14 12:55:22.629: INFO: Successfully updated pod "adopt-release-7xc4n"
STEP: Checking that the Job releases the Pod
Jan 14 12:55:22.630: INFO: Waiting up to 15m0s for pod "adopt-release-7xc4n" in namespace "job-6091" to be "released"
Jan 14 12:55:22.640: INFO: Pod "adopt-release-7xc4n": Phase="Running", Reason="", readiness=true. Elapsed: 10.504818ms
Jan 14 12:55:24.648: INFO: Pod "adopt-release-7xc4n": Phase="Running", Reason="", readiness=true. Elapsed: 2.018290078s
Jan 14 12:55:24.648: INFO: Pod "adopt-release-7xc4n" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:188
Jan 14 12:55:24.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6091" for this suite.
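Note: adoption and release above are driven by labels and ownerReferences: the Job controller adopts a matching pod that has lost its controller ownerReference, and releases a pod whose labels no longer match the Job's selector. A hedged sketch of inspecting that relationship (the pod name is from this run; which label key a pod must keep depends on the Job's selector, which by default matches a generated controller-uid label):

  kubectl get pod adopt-release-7xc4n -n job-6091 -o jsonpath='{.metadata.ownerReferences[?(@.controller==true)].kind}'   # "Job" while adopted
  kubectl get job adopt-release -n job-6091 -o jsonpath='{.spec.selector.matchLabels}'                                    # the labels a pod must keep to stay owned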
•
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":32,"skipped":744,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:24.609: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/apps/disruption.go:71
[It] should update/patch PodDisruptionBudget status [Conformance]
  test/e2e/framework/framework.go:652
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Jan 14 12:55:26.740: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:188
Jan 14 12:55:28.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4419" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":35,"skipped":629,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:28.919: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating configMap with name projected-configmap-test-volume-511874ea-f8f7-44d7-9c6c-6f0affb0072d
STEP: Creating a pod to test consume configMaps
Jan 14 12:55:28.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f" in namespace "projected-7138" to be "Succeeded or Failed"
Jan 14 12:55:28.980: INFO: Pod "pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342855ms
Jan 14 12:55:30.994: INFO: Pod "pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020427285s
Jan 14 12:55:32.999: INFO: Pod "pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025439782s
STEP: Saw pod success
Jan 14 12:55:32.999: INFO: Pod "pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f" satisfied condition "Succeeded or Failed"
Jan 14 12:55:33.005: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-g557ne pod pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:55:33.040: INFO: Waiting for pod pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f to disappear
Jan 14 12:55:33.045: INFO: Pod pod-projected-configmaps-9f28d703-d780-4cb5-adaa-f428a449e85f no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:188
Jan 14 12:55:33.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7138" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":666,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:24.702: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:652
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:188
Jan 14 12:55:35.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8345" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":33,"skipped":754,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:33.198: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test override arguments
Jan 14 12:55:33.257: INFO: Waiting up to 5m0s for pod "client-containers-0c772382-2015-437c-a218-f30c8d91e5c9" in namespace "containers-3949" to be "Succeeded or Failed"
Jan 14 12:55:33.262: INFO: Pod "client-containers-0c772382-2015-437c-a218-f30c8d91e5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.376971ms
Jan 14 12:55:35.271: INFO: Pod "client-containers-0c772382-2015-437c-a218-f30c8d91e5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014504917s
Jan 14 12:55:37.277: INFO: Pod "client-containers-0c772382-2015-437c-a218-f30c8d91e5c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020687427s
STEP: Saw pod success
Jan 14 12:55:37.278: INFO: Pod "client-containers-0c772382-2015-437c-a218-f30c8d91e5c9" satisfied condition "Succeeded or Failed"
Jan 14 12:55:37.282: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-md-0-wrp6b-6689dfd856-j74xk pod client-containers-0c772382-2015-437c-a218-f30c8d91e5c9 container agnhost-container: <nil>
STEP: delete the pod
Jan 14 12:55:37.304: INFO: Waiting for pod client-containers-0c772382-2015-437c-a218-f30c8d91e5c9 to disappear
Jan 14 12:55:37.308: INFO: Pod client-containers-0c772382-2015-437c-a218-f30c8d91e5c9 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:188
Jan 14 12:55:37.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3949" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":708,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:35.852: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:89
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 14 12:55:37.613: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 14 12:55:39.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 12, 55, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 55, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 12, 55, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 12, 55, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 14 12:55:42.658: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:652
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:188
Jan 14 12:55:42.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
�[1mSTEP�[0m: Destroying namespace "webhook-431" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-431-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:104 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":34,"skipped":755,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 14 12:55:42.811: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:245 [BeforeEach] Kubectl logs test/e2e/kubectl/kubectl.go:1412 �[1mSTEP�[0m: creating an pod Jan 14 12:55:42.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 14 12:55:43.042: INFO: stderr: "" Jan 14 12:55:43.042: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Waiting for log generator to start. Jan 14 12:55:43.042: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 14 12:55:43.042: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1460" to be "running and ready, or succeeded" Jan 14 12:55:43.049: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.911047ms Jan 14 12:55:45.055: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.0135763s Jan 14 12:55:45.055: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 14 12:55:45.055: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator]
STEP: checking for matching strings
Jan 14 12:55:45.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator'
Jan 14 12:55:45.210: INFO: stderr: ""
Jan 14 12:55:45.210: INFO: stdout: "I0114 12:55:44.125411 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/4w8 297\nI0114 12:55:44.326076 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/q29 232\nI0114 12:55:44.526471 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/7qdx 295\nI0114 12:55:44.726054 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/2sf6 554\nI0114 12:55:44.925889 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/dq4 380\nI0114 12:55:45.126332 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/bzbd 545\n"
Jan 14 12:55:47.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator'
Jan 14 12:55:47.355: INFO: stderr: ""
Jan 14 12:55:47.355: INFO: stdout: "I0114 12:55:44.125411 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/4w8 297\nI0114 12:55:44.326076 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/q29 232\nI0114 12:55:44.526471 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/7qdx 295\nI0114 12:55:44.726054 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/2sf6 554\nI0114 12:55:44.925889 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/dq4 380\nI0114 12:55:45.126332 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/bzbd 545\nI0114 12:55:45.326246 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/tmqd 296\nI0114 12:55:45.525566 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/t6qx 594\nI0114 12:55:45.726060 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/hnzm 475\nI0114 12:55:45.926441 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/d58 254\nI0114 12:55:46.125914 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/rkfw 312\nI0114 12:55:46.326378 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/gnf 435\nI0114 12:55:46.525829 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/pm45 470\nI0114 12:55:46.726308 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/47j 291\nI0114 12:55:46.925803 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/scdm 371\nI0114 12:55:47.126307 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/cfd 246\nI0114 12:55:47.325776 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/pq5 267\n"
STEP: limiting log lines
Jan 14 12:55:47.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator --tail=1'
Jan 14 12:55:47.495: INFO: stderr: ""
Jan 14 12:55:47.495: INFO: stdout: "I0114 12:55:47.325776 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/pq5 267\n"
Jan 14 12:55:47.495: INFO: got output "I0114 12:55:47.325776 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/pq5 267\n"
STEP: limiting log bytes
Jan 14 12:55:47.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator --limit-bytes=1'
Jan 14 12:55:47.646: INFO: stderr: ""
Jan 14 12:55:47.646: INFO: stdout: "I"
Jan 14 12:55:47.646: INFO: got output "I"
STEP: exposing timestamps
Jan 14 12:55:47.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator --tail=1 --timestamps'
Jan 14 12:55:47.839: INFO: stderr: ""
Jan 14 12:55:47.839: INFO: stdout: "2023-01-14T12:55:47.726137460Z I0114 12:55:47.725713 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/5sbg 474\n"
Jan 14 12:55:47.839: INFO: got output "2023-01-14T12:55:47.726137460Z I0114 12:55:47.725713 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/5sbg 474\n"
STEP: restricting to a time range
Jan 14 12:55:50.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator --since=1s'
Jan 14 12:55:50.487: INFO: stderr: ""
Jan 14 12:55:50.487: INFO: stdout: "I0114 12:55:49.526173 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/s8d 550\nI0114 12:55:49.725748 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/lv4 346\nI0114 12:55:49.926290 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/xt4j 373\nI0114 12:55:50.125730 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/kc2 470\nI0114 12:55:50.326201 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/fpxj 429\n"
Jan 14 12:55:50.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 logs logs-generator logs-generator --since=24h'
Jan 14 12:55:50.645: INFO: stderr: ""
Jan 14 12:55:50.645: INFO: stdout: "I0114 12:55:44.125411 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/4w8 297\nI0114 12:55:44.326076 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/q29 232\nI0114 12:55:44.526471 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/7qdx 295\nI0114 12:55:44.726054 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/2sf6 554\nI0114 12:55:44.925889 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/dq4 380\nI0114 12:55:45.126332 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/bzbd 545\nI0114 12:55:45.326246 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/tmqd 296\nI0114 12:55:45.525566 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/t6qx 594\nI0114 12:55:45.726060 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/hnzm 475\nI0114 12:55:45.926441 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/d58 254\nI0114 12:55:46.125914 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/rkfw 312\nI0114 12:55:46.326378 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/gnf 435\nI0114 12:55:46.525829 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/pm45 470\nI0114 12:55:46.726308 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/47j 291\nI0114 12:55:46.925803 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/scdm 371\nI0114 12:55:47.126307 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/cfd 246\nI0114 12:55:47.325776 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/pq5 267\nI0114 12:55:47.526243 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/xz2 555\nI0114 12:55:47.725713 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/5sbg 474\nI0114 12:55:47.926332 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/25n 417\nI0114 12:55:48.125766 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/jjct 562\nI0114 12:55:48.326275 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/s47 412\nI0114 12:55:48.525809 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/5c9j 311\nI0114 12:55:48.726230 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/zzb 536\nI0114 12:55:48.925566 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/t59g 235\nI0114 12:55:49.126255 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/b8m 457\nI0114 12:55:49.325715 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/jqh 453\nI0114 12:55:49.526173 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/s8d 550\nI0114 12:55:49.725748 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/lv4 346\nI0114 12:55:49.926290 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/xt4j 373\nI0114 12:55:50.125730 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/kc2 470\nI0114 12:55:50.326201 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/fpxj 429\nI0114 12:55:50.525889 1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/vrz 501\n"
[AfterEach] Kubectl logs
  test/e2e/kubectl/kubectl.go:1417
Jan 14 12:55:50.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1460 delete pod logs-generator'
Jan 14 12:55:51.516: INFO: stderr: ""
Jan 14 12:55:51.516: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:188
Jan 14 12:55:51.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1460" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":35,"skipped":757,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:55:51.557: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  test/e2e/network/endpointslice.go:51
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  test/e2e/framework/framework.go:652
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Jan 14 12:56:11.920: INFO: EndpointSlice for Service endpointslice-9250/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:188
Jan 14 12:56:21.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9250" for this suite.
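The EndpointSlice steps above create Services with and without matching Pods and then delete the slices to watch the endpointslice controller recreate them. The generated slices can be inspected directly while the objects exist; a minimal sketch, with the namespace and Service name taken from the test (substitute your own outside the suite) and the standard kubernetes.io/service-name label the controller sets:

# List the EndpointSlices the controller manages for the Service.
kubectl --kubeconfig=/tmp/kubeconfig --namespace=endpointslice-9250 \
  get endpointslices -l kubernetes.io/service-name=example-named-port
# Delete them and watch the controller recreate them, as the test does.
kubectl --kubeconfig=/tmp/kubeconfig --namespace=endpointslice-9250 \
  delete endpointslices -l kubernetes.io/service-name=example-named-port
kubectl --kubeconfig=/tmp/kubeconfig --namespace=endpointslice-9250 \
  get endpointslices -l kubernetes.io/service-name=example-named-port --watch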
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":36,"skipped":765,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:56:21.971: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a pod to test downward API volume plugin
Jan 14 12:56:22.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f" in namespace "projected-9457" to be "Succeeded or Failed"
Jan 14 12:56:22.028: INFO: Pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057004ms
Jan 14 12:56:24.033: INFO: Pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011379727s
Jan 14 12:56:26.041: INFO: Pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01870294s
Jan 14 12:56:28.050: INFO: Pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028468864s
STEP: Saw pod success
Jan 14 12:56:28.051: INFO: Pod "downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f" satisfied condition "Succeeded or Failed"
Jan 14 12:56:28.055: INFO: Trying to get logs from node k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3 pod downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f container client-container: <nil>
STEP: delete the pod
Jan 14 12:56:28.079: INFO: Waiting for pod downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f to disappear
Jan 14 12:56:28.083: INFO: Pod downwardapi-volume-e996dbb0-3876-4f25-8728-13df0dccaf4f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:188
Jan 14 12:56:28.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9457" for this suite.
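The pod under test mounts the downward API through a projected volume and reads its own name back out of the container filesystem. A minimal sketch of an equivalent pod (the pod name is illustrative; the test generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF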
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":773,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:56:28.120: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/common/node/init_container.go:164
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating the pod
Jan 14 12:56:28.148: INFO: PodSpec: initContainers in spec.initContainers
Jan 14 12:57:14.254: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f1dea44d-9851-4b50-bc0f-8f135c76fe85", GenerateName:"", Namespace:"init-container-5577", SelfLink:"", UID:"e767dfb3-7708-4184-ab75-ff6da0cb9997", ResourceVersion:"11972", Generation:0, CreationTimestamp:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"148451908"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b4f530), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 14, 12, 56, 30, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b4f560), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-drrzn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil),
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0018c1940), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-drrzn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-drrzn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.7", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-drrzn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0047bb1b8), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003865ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0047bb230)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0047bb250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0047bb258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0047bb25c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003b3d010), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"192.168.6.72", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.6.72"}}, StartTime:time.Date(2023, time.January, 14, 12, 56, 28, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003865f80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003488000)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://3397d46fe27e5c2f6f3fb9fb787d565b4eb7fad3a25e57c1ab39d923fcde0932", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c1ba0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c1ae0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.7", ImageID:"", ContainerID:"", Started:(*bool)(0xc0047bb2ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:188
Jan 14 12:57:14.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5577" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":38,"skipped":782,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":15,"skipped":276,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:53:03.494: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:188
Jan 14 12:58:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-3923" for this suite.
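A suspended CronJob keeps its schedule, but the controller creates no Jobs for it, which is what the five-minute "Ensuring no jobs are scheduled" window above verifies. A minimal sketch of the same behavior (the name, image, and schedule are illustrative; the test builds its own suspended object directly):

kubectl create cronjob suspend-demo --schedule="*/1 * * * *" \
  --image=k8s.gcr.io/e2e-test-images/busybox:1.29-2 -- /bin/true
# Suspend it; the controller will skip every scheduled run from now on.
kubectl patch cronjob suspend-demo -p '{"spec":{"suspend":true}}'
# Listing jobs explicitly, as the test does, should stay empty.
kubectl get jobs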
• [SLOW TEST:300.143 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":16,"skipped":276,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:58:03.669: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:58:03.708: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: kubectl validation (kubectl create and apply) allows request with known and required properties
Jan 14 12:58:06.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 create -f -'
Jan 14 12:58:08.728: INFO: stderr: ""
Jan 14 12:58:08.728: INFO: stdout: "e2e-test-crd-publish-openapi-9770-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 14 12:58:08.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 delete e2e-test-crd-publish-openapi-9770-crds test-foo'
Jan 14 12:58:08.908: INFO: stderr: ""
Jan 14 12:58:08.909: INFO: stdout: "e2e-test-crd-publish-openapi-9770-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 14 12:58:08.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 apply -f -'
Jan 14 12:58:09.446: INFO: stderr: ""
Jan 14 12:58:09.446: INFO: stdout: "e2e-test-crd-publish-openapi-9770-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 14 12:58:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 delete e2e-test-crd-publish-openapi-9770-crds test-foo'
Jan 14 12:58:09.588: INFO: stderr: ""
Jan 14 12:58:09.589: INFO: stdout: "e2e-test-crd-publish-openapi-9770-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values
Jan 14 12:58:09.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 create -f -'
Jan 14 12:58:09.864: INFO: rc: 1
STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 14 12:58:09.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 create -f -'
Jan 14 12:58:10.112: INFO: rc: 1
Jan 14 12:58:10.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 apply -f -'
Jan 14 12:58:10.362: INFO: rc: 1
STEP: kubectl validation (kubectl create and apply) rejects request without required properties
Jan 14 12:58:10.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 create -f -'
Jan 14 12:58:10.639: INFO: rc: 1
Jan 14 12:58:10.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 --namespace=crd-publish-openapi-4421 apply -f -'
Jan 14 12:58:10.901: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 14 12:58:10.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 explain e2e-test-crd-publish-openapi-9770-crds'
Jan 14 12:58:11.175: INFO: stderr: ""
Jan 14 12:58:11.175: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9770-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 14 12:58:11.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 explain e2e-test-crd-publish-openapi-9770-crds.metadata'
Jan 14 12:58:11.459: INFO: stderr: ""
Jan 14 12:58:11.459: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9770-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects.
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n Deprecated: ClusterName is a legacy field that was always cleared by the\n system and never used; it will be removed completely in 1.25.\n\n The name in the go struct is changed to help clients detect accidental use.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. 
If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t<string>\n UID is the unique in time and space value for this object. 
It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 14 12:58:11.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 explain e2e-test-crd-publish-openapi-9770-crds.spec'
Jan 14 12:58:11.774: INFO: stderr: ""
Jan 14 12:58:11.774: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9770-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Jan 14 12:58:11.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 explain e2e-test-crd-publish-openapi-9770-crds.spec.bars'
Jan 14 12:58:12.057: INFO: stderr: ""
Jan 14 12:58:12.057: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9770-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 14 12:58:12.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4421 explain e2e-test-crd-publish-openapi-9770-crds.spec.bars2'
Jan 14 12:58:12.323: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:188
Jan 14 12:58:15.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4421" for this suite.
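The validation and explain behavior above comes from the structural schema published with the CRD: the API server serves it through the OpenAPI document, and kubectl uses it to validate and explain custom resources with no client-side knowledge of the type. A minimal sketch of a CRD carrying a comparable schema (the group, names, and fields are illustrative, modeled on the Foo/Bar shapes in the test):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                type: array
                description: List of Bars and their specs.
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                      description: Name of Bar.
EOF
# Once the schema is published, explain works as in the log output above.
kubectl explain foos.spec.bars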
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":17,"skipped":286,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 14 12:58:15.706: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:91
[It] deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:652
Jan 14 12:58:15.740: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 14 12:58:20.747: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 14 12:58:20.747: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  test/e2e/apps/deployment.go:84
Jan 14 12:58:20.767: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3304 0b2e8273-7f21-4690-b59d-e8a807e05cc9 12159 1 2023-01-14 12:58:20 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-14 12:58:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f3d268 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil>
<nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 14 12:58:20.771: INFO: New ReplicaSet "test-cleanup-deployment-6755c7b765" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6755c7b765 deployment-3304 f8ea00ac-221b-45ae-9acb-fed4f6b2a235 12161 1 2023-01-14 12:58:20 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6755c7b765] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0b2e8273-7f21-4690-b59d-e8a807e05cc9 0xc004f3d857 0xc004f3d858}] [] [{kube-controller-manager Update apps/v1 2023-01-14 12:58:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b2e8273-7f21-4690-b59d-e8a807e05cc9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6755c7b765,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6755c7b765] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f3d8f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:58:20.771: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 14 12:58:20.771: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3304 34f6d57c-c462-41d1-a5d2-71c0e6f2b446 12160 1 2023-01-14 12:58:15 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 0b2e8273-7f21-4690-b59d-e8a807e05cc9 0xc004f3d6a7 0xc004f3d6a8}] [] [{e2e.test 
Update apps/v1 2023-01-14 12:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 12:58:17 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 12:58:20 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"0b2e8273-7f21-4690-b59d-e8a807e05cc9\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004f3d7d8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 14 12:58:20.782: INFO: Pod "test-cleanup-controller-zbddb" is available: &Pod{ObjectMeta:{test-cleanup-controller-zbddb test-cleanup-controller- deployment-3304 5b59e86e-9714-466b-bdc0-c2de01cfaa8f 12145 0 2023-01-14 12:58:15 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 34f6d57c-c462-41d1-a5d2-71c0e6f2b446 0xc0048dffa7 0xc0048dffa8}] [] [{kube-controller-manager Update v1 2023-01-14 12:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"34f6d57c-c462-41d1-a5d2-71c0e6f2b446\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 12:58:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f7557,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7557,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-ihjwwi-worker-1ixaq3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:58:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:58:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 12:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.73,StartTime:2023-01-14 12:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 12:58:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://09ca6ecc52d942424b650e0ec0971bdb679a4872adec42b945c5a36e04936f06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 12:58:20.782: INFO: Pod "test-cleanup-deployment-6755c7b765-bxk6k" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6755c7b765-bxk6k test-cleanup-deployment-6755c7b765- deployment-3304 ba50fbc7-a8e1-411f-963f-3cd07ed61c1b 12163 0 2023-01-14 12:58:20 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6755c7b765] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6755c7b765 f8ea00ac-221b-45ae-9acb-fed4f6b2a235 0xc004fba2b7 0xc004fba2b8}] [] [{kube-controller-manager Update v1 2023-01-14 12:58:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8ea00ac-221b-45ae-9acb-fed4f6b2a235\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rgqbc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgqbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTim
e:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedN
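Reading the two dumps above: the httpd pod is Running with Ready=True (transition at 12:58:17), while the agnhost pod "test-cleanup-deployment-6755c7b765-bxk6k" is still Pending at 12:58:20 with an empty NodeName and no conditions, i.e. it has not been scheduled yet, which is why the test logs it as "not available". In Deployment terms a pod only counts as available once it is Ready and has stayed Ready for minReadySeconds. Below is a minimal sketch of that availability check, assuming the standard k8s.io/api and k8s.io/apimachinery modules; isPodAvailable is a hypothetical helper written here for illustration (it mirrors the semantics of k8s.io/kubernetes/pkg/api/v1/pod.IsPodAvailable, and is not the e2e framework's actual code):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable (hypothetical, for illustration): a pod is "available"
// once its Ready condition is True and has been True for at least
// minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	// Find the Ready condition. A Pending pod like the agnhost pod in
	// the dump above has an empty Conditions slice, so it fails here.
	var ready *corev1.PodCondition
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == corev1.PodReady {
			ready = &pod.Status.Conditions[i]
			break
		}
	}
	if ready == nil || ready.Status != corev1.ConditionTrue {
		return false
	}
	// minReadySeconds == 0 means "available as soon as Ready", which is
	// why the Running httpd pod above already counts as available.
	if minReadySeconds == 0 {
		return true
	}
	minDuration := time.Duration(minReadySeconds) * time.Second
	return !ready.LastTransitionTime.IsZero() &&
		ready.LastTransitionTime.Add(minDuration).Before(now.Time)
}

func main() {
	// A Pending pod with no conditions, like the agnhost pod in the log.
	pending := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	fmt.Println(isPodAvailable(pending, 0, metav1.Now())) // false: not scheduled yet
}

Run against the agnhost pod this returns false immediately (no Ready condition exists while the pod is Pending), which matches the "is not available" line in the log; the httpd pod, Ready since 12:58:17, would return true. The deployment test keeps polling until every replica passes this check or the step times out.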