Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 4h3m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
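The `--ginkgo.focus` value is a regular expression that Ginkgo matches against each spec's full, space-joined description; the backslashes keep the literal spaces and the `[It]` brackets from being read as regex syntax. A minimal sketch of that matching in Go (the spec string below is reconstructed from the pattern itself, not copied from the suite's code):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern from the trigger command above; \s stands for the
	// spaces Ginkgo inserts between container and leaf descriptions.
	focus := regexp.MustCompile(`capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$`)

	// The assembled spec description this pattern is built to select.
	spec := "capz-e2e [It] Conformance Tests conformance-tests"

	fmt.Println(focus.MatchString(spec)) // prints: true
}
```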
[FAILED] Unexpected error:
    <*errors.withStack | 0xc000d8f080>: {
        error: <*errors.withMessage | 0xc000e08b00>{
            cause: <*errors.errorString | 0xc000111ee0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x33843b9, 0x3612527, 0x193033b, 0x1943e38, 0x14c5741],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/28/23 03:04:21.064

There were additional failures detected after the initial failure. These are visible in the timeline.
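Exit code 137 is 128 + 9, i.e. the conformance container was killed with SIGKILL (commonly the kernel OOM killer or an external timeout) rather than failing a test assertion. The `withStack`/`withMessage` nesting in the dump is the shape that `errors.Wrap` from github.com/pkg/errors produces around a stdlib `*errors.errorString`; a minimal reconstruction of that chain (illustrative, not the suite's actual code):

```go
package main

import (
	stderrors "errors"
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	// The cause in the dump is a plain *errors.errorString (stdlib errors.New).
	cause := stderrors.New("error container run failed with exit code 137")

	// errors.Wrap yields *withMessage inside *withStack, the exact
	// nesting printed in the failure output above.
	err := errors.Wrap(cause, "Unable to run conformance tests")

	fmt.Println(err)         // Unable to run conformance tests: error container run failed with exit code 137
	fmt.Printf("%+v\n", err) // %+v additionally prints the recorded stack frames
}
```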
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/27/23 23:13:14.129
INFO: Cluster name is capz-conf-cdfcgm
STEP: Creating namespace "capz-conf-cdfcgm" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:13:14.129
Jan 27 23:13:14.129: INFO: starting to create namespace for hosting the "capz-conf-cdfcgm" test spec
INFO: Creating namespace capz-conf-cdfcgm
INFO: Creating event watcher for namespace "capz-conf-cdfcgm"
< Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/27/23 23:13:14.182 (53ms)
> Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/27/23 23:13:14.182
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/27/23 23:13:14.182

conformance-tests
Name                        | N | Min         | Median      | Mean        | StdDev | Max
=========================================================================================
cluster creation [duration] | 1 | 16m27.0908s | 16m27.0908s | 16m27.0908s | 0s     | 16m27.0908s

INFO: Creating the workload cluster with name "capz-conf-cdfcgm" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.24.11-rc.0.6+7c685ed7305e76, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-cdfcgm --infrastructure (default) --kubernetes-version v1.24.11-rc.0.6+7c685ed7305e76 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/27/23 23:13:17.01
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/27/23 23:15:17.109
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/27/23 23:15:17.109
Jan 27 23:17:57.368: INFO: getting history for release projectcalico
Jan 27 23:17:57.431: INFO: Release projectcalico does not exist, installing it
Jan 27 23:17:58.814: INFO: creating 1 resource(s)
Jan 27 23:17:58.900: INFO: creating 1 resource(s)
Jan 27 23:17:58.976: INFO: creating 1 resource(s)
Jan 27 23:17:59.049: INFO: creating 1 resource(s)
Jan 27 23:17:59.136: INFO: creating 1 resource(s)
Jan 27 23:17:59.210: INFO: creating 1 resource(s)
Jan 27 23:17:59.393: INFO: creating 1 resource(s)
Jan 27 23:17:59.506: INFO: creating 1 resource(s)
Jan 27 23:17:59.579: INFO: creating 1 resource(s)
Jan 27 23:17:59.656: INFO: creating 1 resource(s)
Jan 27 23:17:59.731: INFO: creating 1 resource(s)
Jan 27 23:17:59.804: INFO: creating 1 resource(s)
Jan 27 23:17:59.879: INFO: creating 1 resource(s)
Jan 27 23:17:59.954: INFO: creating 1 resource(s)
Jan 27 23:18:00.027: INFO: creating 1 resource(s)
Jan 27 23:18:00.114: INFO: creating 1 resource(s)
Jan 27 23:18:00.238: INFO: creating 1 resource(s)
Jan 27 23:18:00.315: INFO: creating 1 resource(s)
Jan 27 23:18:00.433: INFO: creating 1 resource(s)
Jan 27 23:18:00.565: INFO: creating 1 resource(s)
Jan 27 23:18:01.021: INFO: creating 1 resource(s)
Jan 27 23:18:01.103: INFO: Clearing discovery cache
Jan 27 23:18:01.103: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 27 23:18:04.658: INFO: creating 1 resource(s)
Jan 27 23:18:05.304: INFO: creating 6 resource(s)
Jan 27 23:18:06.077: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/27/23 23:18:06.742
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:18:07.191
Jan 27 23:18:07.191: INFO: starting to wait for deployment to become available
Jan 27 23:18:17.317: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.12604972s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/27/23 23:18:18.715
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:18:19.281
Jan 27 23:18:19.281: INFO: starting to wait for deployment to become available
Jan 27 23:19:09.662: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.38106s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:10.467
Jan 27 23:19:10.467: INFO: starting to wait for deployment to become available
Jan 27 23:19:10.673: INFO: Deployment calico-system/calico-typha is now available, took 205.342128ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/27/23 23:19:10.673
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:11.357
Jan 27 23:19:11.357: INFO: starting to wait for deployment to become available
Jan 27 23:19:31.548: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.19064935s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/27/23 23:19:31.548
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:32.202
Jan 27 23:19:32.202: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 27 23:19:32.265: INFO: 1 daemonset calico-system/calico-node pods are running, took 63.004ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/27/23 23:19:32.265
STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:32.713
Jan 27 23:19:32.713: INFO: waiting for daemonset calico-system/calico-node-windows to be complete
Jan 27 23:19:32.776: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 62.856226ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/27/23 23:19:32.776
STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:33.225
Jan 27 23:19:33.225: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete
Jan 27 23:19:33.287: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 62.174421ms
INFO: Waiting for the first control plane machine managed by capz-conf-cdfcgm/capz-conf-cdfcgm-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/27/23 23:19:33.321
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/27/23 23:19:33.335
Jan 27 23:19:33.432: INFO: getting history for release azuredisk-csi-driver-oot
Jan 27 23:19:33.495: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 27 23:19:37.661: INFO: creating 1 resource(s)
Jan 27 23:19:37.869: INFO: creating 18 resource(s)
Jan 27 23:19:38.371: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/27/23 23:19:38.388
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:19:38.69
Jan 27 23:19:38.690: INFO: starting to wait for deployment to become available
Jan 27 23:20:19.736: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 41.045848671s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/27/23 23:20:19.736
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:20:20.051
Jan 27 23:20:20.051: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 27 23:20:20.115: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 63.200526ms
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 23:20:20.429
Jan 27 23:20:20.429: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 27 23:20:20.492: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 62.469636ms
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-cdfcgm/capz-conf-cdfcgm-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/27/23 23:20:20.504
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/27/23 23:20:20.51
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/27/23 23:20:20.536
STEP: Checking all the machines controlled by capz-conf-cdfcgm-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 23:20:20.549
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/27/23 23:20:20.559
STEP: Checking all the machines controlled by capz-conf-cdfcgm-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 23:29:41.329
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/kubetest/repo-list.yaml' for version 'v1.24.11-rc.0.6+7c685ed7305e76'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=2" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-prepull-images=true" "-dump-logs-on-failure=true" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.progress=true" "-ginkgo.trace=true" "-ginkgo.v=true" "-node-os-distro=windows" "-disable-log-dump=true" "-ginkgo.flakeAttempts=0" "-ginkgo.slowSpecThreshold=120"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 23:29:41.759
I0127 23:29:48.232616      14 e2e.go:129] Starting e2e run "9c6d9b3e-6664-456b-b500-1b5b1128e8b8" on Ginkgo node 1
{"msg":"Test Suite starting","total":61,"completed":0,"skipped":0,"failed":0}

Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1674862188 - Will randomize all specs

Will run 61 of 6973 specs

Jan 27 23:29:50.294: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 27 23:29:50.296: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 27 23:29:50.602: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 27 23:29:50.853: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:50.853: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:50.853: INFO: The status of Pod csi-proxy-rhfls is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:50.853: INFO: 15 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 27 23:29:50.853: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:29:50.853: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:29:50.853: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:29:50.853: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:50.853: INFO: csi-proxy-rhfls capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:50.853: INFO:
Jan 27 23:29:53.100: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:53.100: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:53.100: INFO: The status of Pod csi-proxy-rhfls is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:53.100: INFO: 15 / 18 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan 27 23:29:53.100: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:29:53.100: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:29:53.100: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:29:53.100: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:53.100: INFO: csi-proxy-rhfls capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:53.100: INFO:
Jan 27 23:29:55.104: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:55.104: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:55.104: INFO: The status of Pod csi-proxy-rhfls is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:55.104: INFO: 15 / 18 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Jan 27 23:29:55.104: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:29:55.104: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:29:55.104: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:29:55.104: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:55.104: INFO: csi-proxy-rhfls capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:55.104: INFO:
Jan 27 23:29:57.098: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:57.098: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:57.098: INFO: The status of Pod csi-proxy-rhfls is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:57.098: INFO: 15 / 18 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Jan 27 23:29:57.098: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:29:57.098: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:29:57.098: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:29:57.098: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:57.098: INFO: csi-proxy-rhfls capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [csi-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:57.098: INFO:
Jan 27 23:29:59.097: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:59.097: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:29:59.097: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Jan 27 23:29:59.097: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:29:59.097: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:29:59.097: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:29:59.097: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:29:59.097: INFO:
Jan 27 23:30:01.098: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:01.098: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:01.098: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Jan 27 23:30:01.098: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:01.098: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:01.098: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:01.098: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:01.098: INFO:
Jan 27 23:30:03.099: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:03.099: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:03.099: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
Jan 27 23:30:03.099: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:03.099: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:03.099: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:03.099: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:03.099: INFO:
Jan 27 23:30:05.099: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:05.099: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:05.099: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
Jan 27 23:30:05.099: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:05.099: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:05.099: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:05.099: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:05.099: INFO:
Jan 27 23:30:07.100: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:07.100: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:07.100: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
Jan 27 23:30:07.100: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:07.100: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:07.100: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:07.100: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:07.100: INFO:
Jan 27 23:30:09.114: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:09.114: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:09.114: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
Jan 27 23:30:09.115: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:09.115: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:09.115: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:09.115: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:09.115: INFO:
Jan 27 23:30:11.101: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:11.101: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:11.101: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
Jan 27 23:30:11.101: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:11.101: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:11.101: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:11.102: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:11.102: INFO:
Jan 27 23:30:13.099: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:13.099: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:13.099: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
Jan 27 23:30:13.099: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:13.099: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:13.099: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:13.099: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:13.099: INFO:
Jan 27 23:30:15.099: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:15.099: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:15.099: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
Jan 27 23:30:15.099: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:15.099: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:15.099: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:15.099: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:15.099: INFO:
Jan 27 23:30:17.098: INFO: The status of Pod csi-azuredisk-node-win-p4gmt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:17.098: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:17.098: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
Jan 27 23:30:17.098: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:17.098: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:17.098: INFO: csi-azuredisk-node-win-p4gmt capz-conf-mpgmr Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:18 +0000 UTC }]
Jan 27 23:30:17.099: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:17.099: INFO:
Jan 27 23:30:19.097: INFO: The status of Pod csi-azuredisk-node-win-qcbvj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 23:30:19.097: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (28 seconds elapsed)
Jan 27 23:30:19.097: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:19.097: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:30:19.097: INFO: csi-azuredisk-node-win-qcbvj capz-conf-x4p77 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 23:29:22 +0000 UTC }]
Jan 27 23:30:19.097: INFO:
Jan 27 23:30:21.098: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (30 seconds elapsed)
Jan 27 23:30:21.098: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 27 23:30:21.098: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 27 23:30:21.197: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node-win' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed)
Jan 27 23:30:21.197: INFO: Pre-pulling images so that they are cached for the tests.
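The pre-pull phase below creates one `img-pull-*` DaemonSet per test image and polls each until every schedulable node reports an available pod, so the specs themselves do not pay image-download latency. A minimal sketch of that style of readiness poll with client-go (assumes an existing clientset; this is not the e2e framework's actual helper):

```go
package prepull

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady polls until every node that should run a pod of
// the DaemonSet reports one available -- the condition the img-pull wait
// below logs as "Number of running nodes: N, number of available pods: N".
func waitForDaemonSetReady(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := c.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.DesiredNumberScheduled == ds.Status.NumberAvailable, nil
	})
}
```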
Jan 27 23:30:21.650: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39
Jan 27 23:30:21.737: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:21.836: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0
Jan 27 23:30:21.836: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:30:30.921: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:31.019: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0
Jan 27 23:30:31.019: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:30:39.919: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:40.013: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0
Jan 27 23:30:40.013: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:30:48.917: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:49.012: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 1
Jan 27 23:30:49.012: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:30:57.923: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:58.017: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 2
Jan 27 23:30:58.017: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39
Jan 27 23:30:58.017: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2
Jan 27 23:30:58.100: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:58.195: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2: 2
Jan 27 23:30:58.195: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2
Jan 27 23:30:58.195: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2
Jan 27 23:30:58.278: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:58.371: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2: 2
Jan 27 23:30:58.371: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2
Jan 27 23:30:58.371: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2
Jan 27 23:30:58.454: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:30:58.547: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2: 2
Jan 27 23:30:58.547: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2
Jan 27 23:30:58.611: INFO: e2e test version: v1.24.11-rc.0.6+7c685ed7305e76
Jan 27 23:30:58.673: INFO: kube-apiserver version: v1.24.11-rc.0.6+7c685ed7305e76
Jan 27 23:30:58.673: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 27 23:30:58.737: INFO: Cluster IP family: ipv4
------------------------------
[sig-apps] CronJob
should not schedule jobs when suspended [Slow] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:30:58.738: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
W0127 23:30:59.015499      14 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 27 23:30:59.015: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jan 27 23:30:59.080: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:188
Jan 27 23:35:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6005" for this suite.
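The spec above creates a CronJob with `.spec.suspend` set to true and then asserts that no Jobs are ever created for it. A minimal sketch of such an object in Go (name, schedule, and image are illustrative, not the suite's actual fixture):

```go
package fixtures

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// suspendedCronJob builds a CronJob that would fire every minute,
// except that Suspend=true tells the controller to skip all scheduling.
func suspendedCronJob() *batchv1.CronJob {
	suspend := true
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			Suspend:  &suspend,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:  "c",
								Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
								Args:  []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
}
```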
• [SLOW TEST:301.052 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":61,"completed":1,"skipped":33,"failed":0}
------------------------------
[sig-apps] CronJob
should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:35:59.795: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:188
Jan 27 23:42:00.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5623" for this suite.
• [SLOW TEST:361.022 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":61,"completed":2,"skipped":293,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
should verify changes to a daemon set status [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:42:00.818: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
------------------------------
[sig-apps] Daemon set [Serial]
  should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:42:00.818: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 27 23:42:01.737: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:42:01.803: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:01.803: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:02.881: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:42:02.951: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:02.951: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:03.882: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:42:03.948: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:03.948: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:04.882: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:42:04.948: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:04.948: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:05.882: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 23:42:05.949: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 27 23:42:05.949: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Getting /status
Jan 27 23:42:06.073: INFO: Daemon Set daemon-set has Conditions: []
STEP: updating the DaemonSet Status
Jan 27 23:42:06.202: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the daemon set status to be updated
Jan 27 23:42:06.266: INFO: Observed &DaemonSet event: ADDED
Jan 27 23:42:06.266: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.268: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.268: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.268: INFO: Found daemon set daemon-set in namespace daemonsets-1822 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 27 23:42:06.268: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status
STEP: watching for the daemon set status to be patched
Jan 27 23:42:06.398: INFO: Observed &DaemonSet event: ADDED
Jan 27 23:42:06.398: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.400: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.400: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.402: INFO: Observed daemon set daemon-set in namespace daemonsets-1822 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 27 23:42:06.402: INFO: Observed &DaemonSet event: MODIFIED
Jan 27 23:42:06.403: INFO: Found daemon set daemon-set in namespace daemonsets-1822 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }]
Jan 27 23:42:06.403: INFO: Daemon set daemon-set has a patched status
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1822, will wait for the garbage collector to delete the pods
Jan 27 23:42:06.692: INFO: Deleting DaemonSet.extensions daemon-set took: 64.453787ms
Jan 27 23:42:06.793: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.410064ms
Jan 27 23:42:12.055: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:12.055: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 27 23:42:12.117: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5457"},"items":null}
Jan 27 23:42:12.178: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5457"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 27 23:42:12.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1822" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":61,"completed":3,"skipped":308,"failed":0}
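The update/patch pair above goes through the DaemonSet's /status subresource rather than the main resource. A hand-run approximation of the patch step, assuming a kubectl new enough to carry the --subresource flag (it first shipped in v1.24) and reusing the namespace and condition type from the log:

kubectl --kubeconfig=/tmp/kubeconfig -n daemonsets-1822 patch daemonset daemon-set \
  --subresource=status --type=merge \
  -p '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'

The watch the test keeps open then sees another MODIFIED event and checks that the new condition is present, which is the "Found daemon set ... Conditions: [{StatusPatched True ...}]" line above.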
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:42:12.531: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:652
Jan 27 23:42:13.272: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 27 23:42:13.400: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:13.400: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Change node label to blue, check that daemon pod is launched.
Jan 27 23:42:13.700: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:13.700: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:14.766: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:14.766: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:15.765: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:15.765: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:16.765: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:16.765: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:17.767: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 27 23:42:17.767: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 27 23:42:18.032: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:18.032: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 27 23:42:18.163: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:18.163: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:19.228: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:19.228: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:28.227: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:28.227: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
Jan 27 23:42:29.228: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 27 23:42:29.228: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4435, will wait for the garbage collector to delete the pods
Jan 27 23:42:29.579: INFO: Deleting DaemonSet.extensions daemon-set took: 64.530378ms
Jan 27 23:42:29.680: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.441174ms
Jan 27 23:42:34.642: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 23:42:34.642: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 27 23:42:34.704: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5603"},"items":null}
Jan 27 23:42:34.766: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5603"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 27 23:42:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4435" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":61,"completed":4,"skipped":329,"failed":0}
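The blue/green dance above is entirely label-driven: the DaemonSet carries a nodeSelector, so its pod is created or evicted as the node's label changes. A sketch with an illustrative label key (the real test generates a prefixed one) and the pause image already used elsewhere in this run:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: selector-demo               # hypothetical name
spec:
  selector:
    matchLabels:
      app: selector-demo
  template:
    metadata:
      labels:
        app: selector-demo
    spec:
      nodeSelector:
        color: blue                 # only nodes labeled color=blue run the pod
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.7
EOF

kubectl label node capz-conf-mpgmr color=blue               # daemon pod is scheduled
kubectl label node capz-conf-mpgmr color=green --overwrite  # pod is unscheduled again

Changing the DaemonSet's own nodeSelector to green, as the test does next, brings the pod back without touching the node again.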
------------------------------
[sig-api-machinery] Garbage collector
  should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:42:35.195: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
STEP: Create the cronjob
STEP: Wait for the CronJob to create new Job
STEP: Delete the cronjob
STEP: Verify if cronjob does not leave jobs nor pods behind
STEP: Gathering metrics
Jan 27 23:43:00.843: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true)
Jan 27 23:43:01.373: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 27 23:43:01.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6094" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":61,"completed":5,"skipped":372,"failed":0}
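What the garbage-collector spec leans on is the ownership chain CronJob -> Job -> Pod: each Job carries an ownerReference to the CronJob and each Pod to its Job, so deleting the root lets the collector sweep the rest. A manual equivalent, reusing the illustrative CronJob name from the earlier sketch:

kubectl delete cronjob forbid-example --cascade=foreground   # block until dependents are gone
kubectl get jobs,pods                                        # both lists should come back empty

The default --cascade=background returns immediately and leaves the cleanup to the controller-manager's garbage collector, which is the asynchronous path the "Verify if cronjob does not leave jobs nor pods behind" step exercises.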
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:43:01.512: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Jan 27 23:43:01.947: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 23:43:02.082: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 23:43:02.144: INFO: Logging pods the apiserver thinks are on node capz-conf-mpgmr before test
Jan 27 23:43:02.215: INFO: calico-node-windows-pkjkv from calico-system started at 2023-01-27 23:28:48 +0000 UTC (2 container statuses recorded)
Jan 27 23:43:02.215: INFO: Container calico-node-felix ready: true, restart count 1
Jan 27 23:43:02.215: INFO: Container calico-node-startup ready: true, restart count 0
Jan 27 23:43:02.215: INFO: containerd-logger-7b895 from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.215: INFO: Container containerd-logger ready: true, restart count 0
Jan 27 23:43:02.215: INFO: csi-azuredisk-node-win-p4gmt from kube-system started at 2023-01-27 23:29:18 +0000 UTC (3 container statuses recorded)
Jan 27 23:43:02.215: INFO: Container azuredisk ready: true, restart count 0
Jan 27 23:43:02.215: INFO: Container liveness-probe ready: true, restart count 0
Jan 27 23:43:02.215: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 27 23:43:02.215: INFO: csi-proxy-mpvp5 from kube-system started at 2023-01-27 23:29:18 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.215: INFO: Container csi-proxy ready: true, restart count 0
Jan 27 23:43:02.215: INFO: kube-proxy-windows-bd49q from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.215: INFO: Container kube-proxy ready: true, restart count 0
Jan 27 23:43:02.215: INFO: Logging pods the apiserver thinks are on node capz-conf-x4p77 before test
Jan 27 23:43:02.285: INFO: calico-node-windows-n6ccv from calico-system started at 2023-01-27 23:28:51 +0000 UTC (2 container statuses recorded)
Jan 27 23:43:02.286: INFO: Container calico-node-felix ready: true, restart count 0
Jan 27 23:43:02.286: INFO: Container calico-node-startup ready: true, restart count 0
Jan 27 23:43:02.286: INFO: containerd-logger-bqsnj from kube-system started at 2023-01-27 23:28:51 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.286: INFO: Container containerd-logger ready: true, restart count 0
Jan 27 23:43:02.286: INFO: csi-azuredisk-node-win-qcbvj from kube-system started at 2023-01-27 23:29:22 +0000 UTC (3 container statuses recorded)
Jan 27 23:43:02.286: INFO: Container azuredisk ready: true, restart count 0
Jan 27 23:43:02.286: INFO: Container liveness-probe ready: true, restart count 0
Jan 27 23:43:02.286: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 27 23:43:02.286: INFO: csi-proxy-rhfls from kube-system started at 2023-01-27 23:29:22 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.286: INFO: Container csi-proxy ready: true, restart count 0
Jan 27 23:43:02.286: INFO: kube-proxy-windows-98j6m from kube-system started at 2023-01-27 23:28:51 +0000 UTC (1 container statuses recorded)
Jan 27 23:43:02.286: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/framework/framework.go:652
STEP: verifying the node has the label node capz-conf-mpgmr
STEP: verifying the node has the label node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod calico-node-windows-n6ccv requesting resource cpu=0m on Node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod calico-node-windows-pkjkv requesting resource cpu=0m on Node capz-conf-mpgmr
Jan 27 23:43:02.699: INFO: Pod containerd-logger-7b895 requesting resource cpu=0m on Node capz-conf-mpgmr
Jan 27 23:43:02.699: INFO: Pod containerd-logger-bqsnj requesting resource cpu=0m on Node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod csi-azuredisk-node-win-p4gmt requesting resource cpu=0m on Node capz-conf-mpgmr
Jan 27 23:43:02.699: INFO: Pod csi-azuredisk-node-win-qcbvj requesting resource cpu=0m on Node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod csi-proxy-mpvp5 requesting resource cpu=0m on Node capz-conf-mpgmr
Jan 27 23:43:02.699: INFO: Pod csi-proxy-rhfls requesting resource cpu=0m on Node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod kube-proxy-windows-98j6m requesting resource cpu=0m on Node capz-conf-x4p77
Jan 27 23:43:02.699: INFO: Pod kube-proxy-windows-bd49q requesting resource cpu=0m on Node capz-conf-mpgmr
STEP: Starting Pods to consume most of the cluster CPU.
Jan 27 23:43:02.699: INFO: Creating a pod which consumes cpu=2800m on Node capz-conf-mpgmr
Jan 27 23:43:02.765: INFO: Creating a pod which consumes cpu=2800m on Node capz-conf-x4p77
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c.173e4f024ed2ee3a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1605/filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c to capz-conf-mpgmr]
STEP: Considering event: Type = [Normal], Name = [filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c.173e4f02c92b0474], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.7"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c.173e4f0613786848], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.7" in 14.1316919s]
STEP: Considering event: Type = [Normal], Name = [filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c.173e4f061a3c2228], Reason = [Created], Message = [Created container filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c.173e4f066c7d4260], Reason = [Started], Message = [Started container filler-pod-15f9609e-473f-449d-80a9-0c5c0627450c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b.173e4f02527b2e39], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1605/filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b to capz-conf-x4p77]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b.173e4f02cc999170], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.7"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b.173e4f062b5fc99c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.7" in 14.4750711s]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b.173e4f062ff87ad0], Reason = [Created], Message = [Created container filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b.173e4f0680185594], Reason = [Started], Message = [Started container filler-pod-4b3013cd-8e6d-492c-a3db-617e4c18644b]
STEP: Considering event: Type = [Warning], Name = [additional-pod.173e4f0714583268], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.]
STEP: removing the label node off the node capz-conf-mpgmr
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node capz-conf-x4p77
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 27 23:43:24.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1605" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":61,"completed":6,"skipped":485,"failed":0}
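The predicate under test is plain CPU accounting: each filler pod requests 2800m, leaving less than a full CPU free on each worker, so a third pod with a large enough request must stay Pending with the Insufficient cpu event above. A pod that would trip the same predicate on this cluster (the request value is chosen for illustration):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.7
    resources:
      requests:
        cpu: "1"          # more than either worker has left after the fillers
EOF

kubectl describe pod additional-pod then shows the same FailedScheduling reason: the control-plane node is excluded by its taint, both workers are short on CPU, and preemption finds no victims to evict.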
------------------------------
[sig-node] Pods
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:723
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 27 23:43:24.874: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  test/e2e/common/node/pods.go:191
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:723
Jan 27 23:43:25.440: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:43:27.504: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:43:29.504: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:43:31.503: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Jan 27 23:55:02.432: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-27 23:49:56 +0000 UTC restartedAt=2023-01-27 23:55:01 +0000 UTC (5m5s)
Jan 28 00:00:11.877: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-27 23:55:06 +0000 UTC restartedAt=2023-01-28 00:00:10 +0000 UTC (5m4s)
Jan 28 00:05:20.037: INFO: getRestartDelay: restartCount = 9, finishedAt=2023-01-28 00:00:15 +0000 UTC restartedAt=2023-01-28 00:05:19 +0000 UTC (5m4s)
STEP: getting restart delay after a capped delay
Jan 28 00:10:28.529: INFO: getRestartDelay: restartCount = 10, finishedAt=2023-01-28 00:05:24 +0000 UTC restartedAt=2023-01-28 00:10:27 +0000 UTC (5m3s)
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:188
Jan 28 00:10:28.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9186" for this suite.
• [SLOW TEST:1623.816 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:723
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":61,"completed":7,"skipped":628,"failed":0}
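MaxContainerBackOff is the kubelet's ceiling on the crash-loop restart delay: the delay doubles from 10s upward and stops growing at 5 minutes, which is what the steady ~5m gaps between restartCount 7 and 10 above demonstrate. A pod that reproduces the pattern (the image and command are illustrative; the name matches the test pod but the spec is only a sketch):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: back-off-cap
spec:
  restartPolicy: Always
  containers:
  - name: crasher
    image: busybox
    command: ["sh", "-c", "sleep 5; exit 1"]   # exit promptly so every run counts as a crash
EOF

Watching kubectl get pod back-off-cap -w shows CrashLoopBackOff with the interval between restarts converging on the 5m cap after a handful of restarts.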
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PriorityClass endpoints
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:10:28.692: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Jan 28 00:10:29.313: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 28 00:11:29.924: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:11:29.988: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  test/e2e/scheduling/preemption.go:690
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 00:11:30.614: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Jan 28 00:11:30.675: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  test/e2e/framework/framework.go:188
Jan 28 00:11:30.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-5280" for this suite.
[AfterEach] PriorityClass endpoints
  test/e2e/scheduling/preemption.go:706
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 28 00:11:31.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8044" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":61,"completed":8,"skipped":803,"failed":0}
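Both "is invalid" lines above are expected: a PriorityClass's value is immutable after creation, and the spec deliberately sends updates that the API server must reject while every other verb (get, list, watch, label patches, delete) succeeds. Reproducing the rejection by hand, with an illustrative name:

kubectl create priorityclass p1-demo --value=100
kubectl patch priorityclass p1-demo --type=merge -p '{"value":200}'
# rejected with: Value: Forbidden: may not be changed in an update.

To change a priority value, the class has to be deleted and recreated.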
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:11:31.673: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-8104
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating stateful set ss in namespace statefulset-8104
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8104
Jan 28 00:11:32.292: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 00:11:42.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 28 00:11:42.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:11:43.567: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 28 00:11:43.567: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:11:43.567: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:11:43.631: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:11:43.631: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:11:43.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999636s
Jan 28 00:11:44.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.931218147s
Jan 28 00:11:46.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.863240827s
Jan 28 00:11:47.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.79387302s
Jan 28 00:11:48.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.725087065s
Jan 28 00:11:49.228: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.656380048s
Jan 28 00:11:50.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.588074137s
Jan 28 00:11:51.365: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.519231412s
Jan 28 00:11:52.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.450932811s
Jan 28 00:11:53.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 382.111035ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8104
Jan 28 00:11:54.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:11:55.333: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 28 00:11:55.333: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:11:55.333: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:11:55.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:11:56.099: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 28 00:11:56.099: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:11:56.099: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:11:56.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:11:56.853: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 28 00:11:56.853: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:11:56.853: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:11:56.921: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:11:56.921: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:11:56.921: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
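"Burst" scaling is StatefulSet podManagementPolicy: Parallel: pods are created and deleted without waiting for their ordinal neighbours to be Running and Ready, and the index.html shuffling above is simply how the test flips the webserver's readiness probe off and on. A fragment of such a StatefulSet (names and namespace mirror the log; the image and probe are assumptions based on the apache2 paths shown):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-8104
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: Parallel    # burst mode: no ordered, one-at-a-time rollout
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # assumed; the log only shows apache2 paths
        readinessProbe:
          httpGet:
            path: /index.html         # vanishes when the test moves the file away
            port: 80
EOF

Under the default OrderedReady policy the scale operations below would stall at the first unready pod; with Parallel they run to completion even though all three replicas are failing their probes, which is exactly what the spec asserts.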
STEP: Scale down will not halt with unhealthy stateful pod
Jan 28 00:11:56.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:11:57.737: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 28 00:11:57.737: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:11:57.737: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:11:57.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:11:58.461: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 28 00:11:58.461: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:11:58.461: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:11:58.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:11:59.191: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 28 00:11:59.191: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:11:59.191: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:11:59.191: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:11:59.253: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 28 00:12:09.383: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:12:09.383: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:12:09.383: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:12:09.577: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 28 00:12:09.577: INFO: ss-0  capz-conf-mpgmr  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC }]
Jan 28 00:12:09.577: INFO: ss-1  capz-conf-x4p77  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:43 +0000 UTC }]
Jan 28 00:12:09.577: INFO: ss-2  capz-conf-x4p77  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:43 +0000 UTC }]
Jan 28 00:12:09.577: INFO:
Jan 28 00:12:09.577: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 00:12:14.919: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 28 00:12:14.919: INFO: ss-0  capz-conf-mpgmr  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC }]
Jan 28 00:12:14.919: INFO:
Jan 28 00:12:14.919: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 28 00:12:19.179: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 28 00:12:19.179: INFO: ss-0  capz-conf-mpgmr  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 00:11:32 +0000 UTC }]
Jan 28 00:12:19.179: INFO:
Jan 28 00:12:19.179: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8104
Jan 28 00:12:20.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:12:20.771: INFO: rc: 1
Jan 28 00:12:20.772: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Jan 28 00:12:30.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:12:31.131: INFO: rc: 1
Jan 28 00:12:31.131: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jan 28 00:15:37.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:15:37.496: INFO: rc: 1
Jan 28 00:15:37.496: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jan 28 00:15:47.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:15:47.863: INFO: rc: 1
Jan 28
00:15:47.863: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:15:57.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:15:58.211: INFO: rc: 1 Jan 28 00:15:58.212: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:08.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:16:08.559: INFO: rc: 1 Jan 28 00:16:08.559: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:18.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:16:18.913: INFO: rc: 1 Jan 28 00:16:18.913: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:28.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:16:29.259: INFO: rc: 1 Jan 28 00:16:29.259: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:39.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:16:39.622: INFO: rc: 1 Jan 28 00:16:39.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:49.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:16:49.972: INFO: rc: 1 Jan 28 00:16:49.972: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:16:59.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:17:00.324: INFO: rc: 1 Jan 28 00:17:00.324: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:17:10.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:17:10.670: INFO: rc: 1 Jan 28 00:17:10.670: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 00:17:20.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8104 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 00:17:21.021: INFO: rc: 1 Jan 28 00:17:21.021: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 28 00:17:21.021: INFO: Scaling statefulset ss to 0 Jan 28 00:17:21.372: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 28 00:17:21.433: INFO: Deleting all statefulset in ns statefulset-8104 Jan 28 00:17:21.494: INFO: Scaling statefulset ss to 0 Jan 28 00:17:21.678: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 00:17:21.739: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:188 Jan 28 00:17:21.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-8104" for this suite. 
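The long stretch above is the e2e framework's RunHostCmd helper re-running one kubectl exec every 10 seconds after the pod it targets has already been deleted. A minimal Go sketch of that retry pattern, not the framework's actual implementation; the kubectl flags and the 10s cadence are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryHostCmd re-runs a kubectl exec until it succeeds or the timeout
// elapses, mirroring the 10s retry cadence visible in the log above.
func retryHostCmd(namespace, pod, shellCmd string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl",
			"--kubeconfig=/tmp/kubeconfig",
			"--namespace="+namespace,
			"exec", pod, "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		if err == nil {
			fmt.Printf("stdout: %s\n", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("command never succeeded: %v, last output: %s", err, out)
		}
		fmt.Printf("rc != 0 (%v), waiting 10s to retry\n", err)
		time.Sleep(10 * time.Second)
	}
}

The test tolerates these failures because the command is wrapped in "|| true" and the scale-down it is really waiting for eventually completes.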
• [SLOW TEST:350.417 seconds] [sig-apps] StatefulSet test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:101 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":61,"completed":9,"skipped":816,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 28 00:17:22.092: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should patch a Namespace [Conformance] test/e2e/framework/framework.go:652
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188 Jan 28 00:17:22.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6725" for this suite.
STEP: Destroying namespace "nspatchtest-a928b04e-fe4a-4fb4-a774-7d42c0063879-5196" for this suite.
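The "patching the Namespace" step above amounts to a strategic-merge patch that adds one label. A minimal client-go sketch; the kubeconfig path comes from the log, while the namespace name and label key/value are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch adding a label, as the test's patch step does.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	if _, err := client.CoreV1().Namespaces().Patch(context.TODO(),
		"nspatchtest-example", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}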
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":61,"completed":10,"skipped":835,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould delete RS created by deployment when not orphaning [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:17:23.054: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for all rs to be garbage collected �[1mSTEP�[0m: Gathering metrics Jan 28 00:17:24.174: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true) Jan 28 00:17:24.693: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 28 00:17:24.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-376" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":61,"completed":11,"skipped":863,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Namespaces [Serial]�[0m �[1mshould ensure that all services are removed when a namespace is deleted [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:17:24.829: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename namespaces �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating a test namespace �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Creating a service in the namespace �[1mSTEP�[0m: Deleting the namespace �[1mSTEP�[0m: Waiting for the namespace to be removed. �[1mSTEP�[0m: Recreating the namespace �[1mSTEP�[0m: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188 Jan 28 00:17:32.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "namespaces-3659" for this suite. �[1mSTEP�[0m: Destroying namespace "nsdeletetest-2512" for this suite. Jan 28 00:17:32.213: INFO: Namespace nsdeletetest-2512 was already deleted �[1mSTEP�[0m: Destroying namespace "nsdeletetest-4296" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":61,"completed":12,"skipped":951,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] GMSA Kubelet [Slow]�[0m �[90mkubelet GMSA support�[0m �[0mwhen creating a pod with correct GMSA credential specs�[0m �[1mpasses the credential specs down to the Pod's containers�[0m �[37mtest/e2e/windows/gmsa_kubelet.go:45�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:17:32.280: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gmsa-kubelet-test-windows �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] passes the credential specs down to the Pod's containers test/e2e/windows/gmsa_kubelet.go:45 �[1mSTEP�[0m: creating a pod with correct GMSA specs Jan 28 00:17:32.840: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Jan 28 00:17:34.904: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Jan 28 00:17:36.905: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true) Jan 28 00:17:38.904: INFO: The status of Pod with-correct-gmsa-specs is Running (Ready = true) �[1mSTEP�[0m: checking the domain reported by nltest in the containers Jan 28 00:17:38.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-9034 exec --namespace=gmsa-kubelet-test-windows-9034 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN' Jan 28 00:17:39.733: INFO: stderr: "" Jan 28 00:17:39.733: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n" Jan 28 00:17:39.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-9034 exec --namespace=gmsa-kubelet-test-windows-9034 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN' Jan 28 00:17:40.472: INFO: stderr: "" Jan 28 00:17:40.472: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n" [AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/framework/framework.go:188 Jan 28 00:17:40.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gmsa-kubelet-test-windows-9034" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","total":61,"completed":13,"skipped":968,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[
36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Variable Expansion�[0m �[1mshould succeed in writing subpaths in container [Slow] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:17:40.615: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: waiting for pod running �[1mSTEP�[0m: creating a file in subpath Jan 28 00:17:55.301: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1398 PodName:var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 28 00:17:55.301: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 28 00:17:55.302: INFO: ExecWithOptions: Clientset creation Jan 28 00:17:55.302: INFO: ExecWithOptions: execute(POST https://capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-1398/pods/var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) �[1mSTEP�[0m: test for file in mounted path Jan 28 00:17:55.839: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1398 PodName:var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d ContainerName:dapi-container Stdin:<nil> CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 28 00:17:55.839: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 28 00:17:55.840: INFO: ExecWithOptions: Clientset creation Jan 28 00:17:55.840: INFO: ExecWithOptions: execute(POST https://capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-1398/pods/var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true)
STEP: updating the annotation value Jan 28 00:17:56.916: INFO: Successfully updated pod "var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully Jan 28 00:17:56.978: INFO: Deleting pod "var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d" in namespace "var-expansion-1398" Jan 28 00:17:57.043: INFO: Wait up to 5m0s for pod "var-expansion-a9701b45-b22a-4718-9e4c-bb190b47672d" to be fully deleted
[AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 28 00:18:01.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1398" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":61,"completed":14,"skipped":1458,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 28 00:18:01.306: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 28 00:18:01.736: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 28 00:18:01.876: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 00:18:01.939: INFO: Logging pods the apiserver thinks are on node capz-conf-mpgmr before test Jan 28 00:18:02.012: INFO: calico-node-windows-pkjkv from calico-system started at 2023-01-27 23:28:48 +0000 UTC (2 container statuses recorded) Jan 28 00:18:02.012: INFO: Container calico-node-felix ready: true, restart count 1 Jan 28 00:18:02.012: INFO: Container calico-node-startup ready: true, restart count 0 Jan 28 00:18:02.012: INFO: containerd-logger-7b895 from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container status recorded) Jan 28 00:18:02.012: INFO: Container containerd-logger ready: true, restart count 0 Jan 28 00:18:02.012: INFO: csi-azuredisk-node-win-p4gmt from kube-system started at 2023-01-27 23:29:18 +0000 UTC (3 container statuses recorded) Jan 28 00:18:02.012: INFO: Container azuredisk ready: true, restart count 0 Jan 28 00:18:02.012: INFO: Container liveness-probe ready: true, restart count 0 Jan 28 00:18:02.012: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 28 00:18:02.012: INFO: csi-proxy-mpvp5 from kube-system started at 2023-01-27 23:29:18 +0000 UTC (1 container status recorded) Jan 28 00:18:02.012: INFO: Container csi-proxy ready: true, restart count 0 Jan 28 00:18:02.012: INFO: kube-proxy-windows-bd49q from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container status recorded) Jan 28 00:18:02.012: INFO: Container kube-proxy ready: true, restart count 0
Jan 28 00:18:02.012: INFO: Logging pods the apiserver thinks are on node capz-conf-x4p77 before test Jan 28 00:18:02.086: INFO: calico-node-windows-n6ccv from calico-system started at 2023-01-27 23:28:51 +0000 UTC (2 container statuses recorded) Jan 28 00:18:02.086: INFO: Container calico-node-felix ready: true, restart count 0 Jan 28 00:18:02.086: INFO: Container calico-node-startup ready: true, restart count 0 Jan 28 00:18:02.086: INFO: containerd-logger-bqsnj from kube-system started at 2023-01-27 23:28:51 +0000 UTC (1 container status recorded) Jan 28 00:18:02.086: INFO: Container containerd-logger ready: true, restart count 0 Jan 28 00:18:02.086: INFO: csi-azuredisk-node-win-qcbvj from kube-system started at 2023-01-27 23:29:22 +0000 UTC (3 container statuses recorded) Jan 28 00:18:02.086: INFO: Container azuredisk ready: true, restart count 0 Jan 28 00:18:02.086: INFO: Container liveness-probe ready: true, restart count 0 Jan 28 00:18:02.086: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 28 00:18:02.086: INFO: csi-proxy-rhfls from kube-system started at 2023-01-27 23:29:22 +0000 UTC (1 container status recorded) Jan 28 00:18:02.086: INFO: Container csi-proxy ready: true, restart count 0 Jan 28 00:18:02.086: INFO: kube-proxy-windows-98j6m from kube-system started at 2023-01-27 23:28:51 +0000 UTC (1 container status recorded) Jan 28 00:18:02.086: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance] test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bf139cb7-b763-404c-8cba-4ef75888f8f0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bf139cb7-b763-404c-8cba-4ef75888f8f0 off the node capz-conf-mpgmr
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bf139cb7-b763-404c-8cba-4ef75888f8f0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188 Jan 28 00:18:12.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5549" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":61,"completed":15,"skipped":1501,"failed":0}
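The relaunch step above creates a pod whose nodeSelector names the random label the test just applied to capz-conf-mpgmr, so the scheduler has exactly one feasible node. A sketch of that pod; the label key and value are the ones printed in the log, while the pod name and image are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// launchLabeledPod relaunches the pod with a nodeSelector matching the
// randomly applied node label, as the test's final step does.
func launchLabeledPod(client kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-bf139cb7-b763-404c-8cba-4ef75888f8f0": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "registry.k8s.io/pause:3.8", // illustrative
			}},
		},
	}
	_, err := client.CoreV1().Pods("sched-pred-5549").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	return err
}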
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 28 00:18:13.145: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 5 pods to 3 pods
and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 5 replicas
STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-4264 I0128 00:18:13.717101 14 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-4264, replica count: 5
STEP: Running controller I0128 00:18:23.818537 14 runners.go:193] rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-4264 I0128 00:18:23.957672 14 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-4264, replica count: 1 I0128 00:18:34.059778 14 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 28 00:18:39.059: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1
Jan 28 00:18:39.121: INFO: RC rc: consume 325 millicores in total Jan 28 00:18:39.121: INFO: RC rc: setting consumption to 325 millicores in total Jan 28 00:18:39.121: INFO: RC rc: sending request to consume 325 millicores
Jan 28 00:18:39.121: INFO: RC rc: consume 0 MB in total Jan 28 00:18:39.121: INFO: RC rc: setting consumption to 0 MB in total Jan 28 00:18:39.121: INFO: RC rc: sending request to consume 0 MB
Jan 28 00:18:39.121: INFO: RC rc: consume custom metric 0 in total Jan 28 00:18:39.121: INFO: RC rc: setting bump of metric QPS to 0 in total Jan 28 00:18:39.121: INFO: RC rc: sending request to consume 0 of custom metric QPS
Jan 28 00:18:39.121: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 28 00:18:39.121: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 28 00:18:39.121: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 28 00:18:39.296 through 00:23:39.358: INFO: (the test polled "waiting for 3 replicas (current: 5)" every 20s while the consumer re-sent the same ConsumeCPU, ConsumeMem, and BumpMetric requests, with identical URLs, every 30s)
Jan 28 00:23:59.358: INFO: waiting for 3 replicas (current: 3) Jan 28 00:23:59.420: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:23:59.481: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9a7f4}
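The ConsumeCPU/ConsumeMem/BumpMetric URLs logged above are requests sent through the API server's service proxy to the rc-ctrl resource-consumer service. A sketch of how such a request can be issued with client-go; the namespace, service name, and query parameters are the ones in the log, and the POST verb is an assumption based on how the e2e resource consumer is typically driven:

package main

import (
	"context"
	"strconv"

	"k8s.io/client-go/kubernetes"
)

// consumeCPU asks the resource-consumer service to burn the given number
// of millicores for 30s, via the API server's service proxy path.
func consumeCPU(client kubernetes.Interface, millicores int) error {
	_, err := client.CoreV1().RESTClient().Post().
		AbsPath("/api/v1/namespaces/horizontal-pod-autoscaling-4264" +
			"/services/rc-ctrl/proxy/ConsumeCPU").
		Param("millicores", strconv.Itoa(millicores)).
		Param("durationSec", "30").
		Param("requestSizeMillicores", "100").
		DoRaw(context.TODO())
	return err
}

Because each request only lasts 30s, the test keeps re-sending it, which is why the same three URLs recur throughout the log while the HPA converges.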
expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:09.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9aa5c} Jan 28 00:24:09.888: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:24:09.888: INFO: RC rc: sending request to consume 0 MB Jan 28 00:24:09.888: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:24:09.888: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:24:12.943: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:24:12.943: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:24:19.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:19.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a58c} Jan 28 00:24:29.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:29.604: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a664} Jan 28 00:24:39.542: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:39.603: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a794} Jan 28 00:24:39.952: INFO: RC rc: sending request to consume 0 MB Jan 28 00:24:39.952: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:24:39.952: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:24:39.952: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:24:43.014: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:24:43.014: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:24:49.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:49.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6b24} Jan 28 00:24:59.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:24:59.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6d1c} Jan 28 00:25:09.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:25:09.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6f3c} Jan 28 00:25:10.016: INFO: RC rc: sending request to consume 0 MB Jan 28 00:25:10.016: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:25:10.016: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:25:10.016: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:25:13.086: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:25:13.087: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:25:19.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:25:19.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a24c} Jan 28 00:25:29.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:25:29.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e629c} Jan 28 00:25:39.545: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:25:39.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9a20c} Jan 28 00:25:40.080: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:25:40.080: INFO: RC rc: sending request to consume 0 MB Jan 28 00:25:40.080: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:25:40.080: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:25:43.157: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:25:43.158: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:25:49.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:25:49.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a574} Jan 28 00:25:59.546: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 
00:25:59.607: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e653c} Jan 28 00:26:09.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:09.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e671c} Jan 28 00:26:10.144: INFO: RC rc: sending request to consume 0 MB Jan 28 00:26:10.144: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:26:10.144: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:26:10.144: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:26:13.227: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:26:13.228: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:26:19.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:19.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6bcc} Jan 28 00:26:29.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:29.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9abac} Jan 28 00:26:39.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:39.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6e3c} Jan 28 00:26:40.209: INFO: RC rc: sending request to consume 0 MB Jan 28 00:26:40.209: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:26:40.209: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:26:40.209: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:26:43.298: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:26:43.298: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:26:49.545: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:49.610: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 
CurrentCPUUtilizationPercentage:0xc0035e73d4} Jan 28 00:26:59.545: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:26:59.607: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e748c} Jan 28 00:27:09.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:09.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9af3c} Jan 28 00:27:10.272: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:27:10.272: INFO: RC rc: sending request to consume 0 MB Jan 28 00:27:10.272: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:27:10.272: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:27:13.367: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:27:13.367: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:27:19.545: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:19.607: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a24c} Jan 28 00:27:29.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:29.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00346a43c} Jan 28 00:27:39.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:39.604: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002b9a1ac} Jan 28 00:27:40.335: INFO: RC rc: sending request to consume 0 MB Jan 28 00:27:40.336: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:27:40.336: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:27:40.336: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:27:43.436: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:27:43.436: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:27:49.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:49.606: INFO: HPA status: 
{ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0033de094} Jan 28 00:27:59.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:27:59.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0033de14c} Jan 28 00:28:09.545: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:28:09.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0033de21c} Jan 28 00:28:10.400: INFO: RC rc: sending request to consume 0 MB Jan 28 00:28:10.400: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:28:10.400: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:28:10.400: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:28:13.506: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:28:13.506: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:28:19.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:28:19.606: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e687c} Jan 28 00:28:29.544: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:28:29.605: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6934} Jan 28 00:28:39.543: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 28 00:28:39.604: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-28 00:23:54 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0035e6a04} Jan 28 00:28:40.463: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 28 00:28:40.463: INFO: RC rc: sending request to consume 0 MB Jan 28 00:28:40.463: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 00:28:40.463: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 00:28:43.575: INFO: RC rc: sending request to consume 325 millicores Jan 28 00:28:43.575: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 00:28:49.545: 
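A note on the HPA status lines above: the CurrentCPUUtilizationPercentage:0xc... values are not log corruption. In the v1 HorizontalPodAutoscalerStatus type that field is an *int32, and the test prints the struct with Go's default struct formatting, which renders pointer fields as addresses. A minimal stand-alone illustration, using a stand-in struct rather than the real k8s.io/api/autoscaling/v1 type:

package main

import "fmt"

// Stand-in for the relevant fields of the v1 HPA status struct; the real
// CurrentCPUUtilizationPercentage field is an *int32, which is why the
// log above shows a hex address instead of a percentage.
type hpaStatus struct {
	CurrentReplicas                 int32
	DesiredReplicas                 int32
	CurrentCPUUtilizationPercentage *int32
}

func main() {
	util := int32(18)
	s := hpaStatus{CurrentReplicas: 3, DesiredReplicas: 3, CurrentCPUUtilizationPercentage: &util}

	// Prints the pointer, e.g. CurrentCPUUtilizationPercentage:0xc000014098,
	// the same shape seen in the log lines above.
	fmt.Printf("%+v\n", s)

	// Dereferencing (after a nil check) prints the actual value.
	if s.CurrentCPUUtilizationPercentage != nil {
		fmt.Printf("cpu=%d%%\n", *s.CurrentCPUUtilizationPercentage)
	}
}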
[... identical stability-window polling continued from 00:28:49 through 00:33:59 ...]
Jan 28 00:33:59.729: INFO: Number of replicas was stable over 10m0s
Jan 28 00:33:59.729: INFO: RC rc: consume 10 millicores in total
Jan 28 00:33:59.729: INFO: RC rc: setting consumption to 10 millicores in total
Jan 28 00:33:59.790: INFO: waiting for 1 replicas (current: 3)
Jan 28 00:34:14.354: INFO: RC rc: sending request to consume 10 millicores
Jan 28 00:34:14.354: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
[... "waiting for 1 replicas (current: 3)" polled every 20s while the 10-millicore consumer cycle repeated every 30s, from 00:34:19 through 00:37:59 ...]
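The ConsumeCPU/ConsumeMem/BumpMetric URL lines show how the e2e resource consumer applies load: it POSTs to its controller service ("rc-ctrl") through the API server's service proxy, and each request holds the requested load for durationSec seconds, which is why the cycle repeats roughly every 30 seconds. Below is a sketch of that request in Go, using the host, namespace, and paths from this run's log; the authenticated *http.Client for the cluster is assumed and not shown.

package consumer

import (
	"fmt"
	"net/http"
	"net/url"
)

// consumeCPU mirrors the ConsumeCPU lines in the log above: a POST through
// the API server's service proxy to the rc-ctrl controller pod, which
// spreads the load across the consumer pods for durationSec seconds.
func consumeCPU(client *http.Client, millicores int) error {
	u := url.URL{
		Scheme: "https",
		Host:   "capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443",
		Path:   "/api/v1/namespaces/horizontal-pod-autoscaling-4264/services/rc-ctrl/proxy/ConsumeCPU",
		RawQuery: url.Values{
			"durationSec":           []string{"30"},
			"millicores":            []string{fmt.Sprint(millicores)},
			"requestSizeMillicores": []string{"100"}, // load is requested in 100m chunks
		}.Encode(),
	}
	resp, err := client.Post(u.String(), "application/x-www-form-urlencoded", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("consume CPU: unexpected status %s", resp.Status)
	}
	return nil
}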
[... the 0 MB / 0 QPS / 10-millicore consumer cycle repeated through 00:39:15 ...]
Jan 28 00:39:19.856: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC rc
Jan 28 00:39:19.922: INFO: RC rc: stopping metric consumer
Jan 28 00:39:19.922: INFO: RC rc: stopping CPU consumer
Jan 28 00:39:19.922: INFO: RC rc: stopping mem consumer
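At this point the autoscaler has completed the second scale-down (3 -> 1) and the consumers are shut down. For orientation, the object under test is a v1 HorizontalPodAutoscaler targeting the ReplicationController "rc"; the sketch below uses bounds inferred from the observed replica counts (min 1, max 5), and the 20% CPU target is an illustrative assumption, not a value read from this log.

package hpaexample

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleHPA sketches the kind of HPA this test drives. Names match the
// log; MinReplicas/MaxReplicas are inferred, and the CPU target is assumed.
func exampleHPA() *autoscalingv1.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20) // assumed target for illustration
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "rc",
			Namespace: "horizontal-pod-autoscaling-4264",
		},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "v1",
				Kind:       "ReplicationController",
				Name:       "rc",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
}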
STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-4264, will wait for the garbage collector to delete the pods
Jan 28 00:39:30.152: INFO: Deleting ReplicationController rc took: 64.599123ms
Jan 28 00:39:30.253: INFO: Terminating ReplicationController rc pods took: 100.332569ms
STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-4264, will wait for the garbage collector to delete the pods
Jan 28 00:39:32.361: INFO: Deleting ReplicationController rc-ctrl took: 63.523706ms
Jan 28 00:39:32.462: INFO: Terminating ReplicationController rc-ctrl pods took: 101.026157ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:188
Jan 28 00:39:34.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-4264" for this suite.
• [SLOW TEST:1281.133 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicationController
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:59
    Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","total":61,"completed":16,"skipped":2052,"failed":0}
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:39:34.280: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 00:39:38.900: INFO: Deleting pod "var-expansion-2bdb7055-9cfc-4413-ad8b-7a6e4e00534d" in namespace "var-expansion-3738"
Jan 28 00:39:38.969: INFO: Wait up to 5m0s for pod "var-expansion-2bdb7055-9cfc-4413-ad8b-7a6e4e00534d" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 28 00:39:41.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3738" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":61,"completed":17,"skipped":2086,"failed":0}
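The var-expansion test that just passed creates a pod whose volumeMount uses a subPathExpr containing backticks and expects the substitution to fail before the container can start; the log only shows the resulting delete-and-wait. Below is a sketch of the pod shape being exercised; the image, names, and mount path are illustrative assumptions, not excerpts from the test source.

package varexpansion

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backtickSubpathPod sketches the failing case: subPathExpr only supports
// $(VAR) references to declared env vars, so the backticks make the
// substitution invalid and the container should fail to start. The test
// then verifies the pod can still be deleted cleanly, as the log shows.
func backtickSubpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2", // illustrative image
				Env: []corev1.EnvVar{{
					Name:  "POD_NAME",
					Value: "var-expansion-backticks",
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/subpath_mount",
					SubPathExpr: "`$(POD_NAME)`", // backticks make this substitution fail
				}},
			}},
		},
	}
}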
�[32m•�[0m{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":61,"completed":17,"skipped":2086,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] Daemon set [Serial]�[0m �[1mshould update pod when spec was updated and update strategy is RollingUpdate [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:39:41.236: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename daemonsets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should update pod when spec was updated and update strategy is 
Jan 28 00:39:41.929: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 00:39:42.063: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 28 00:39:42.127: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 00:39:42.127: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
[... the control-plane taint-skip message and per-node launch check repeated every 1s ...]
Jan 28 00:39:46.262: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 28 00:39:46.262: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1
Jan 28 00:39:47.261: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 28 00:39:47.261: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 28 00:39:47.708: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
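The repeated "can't tolerate ... skip checking this node" lines are expected, not an error: the control-plane node carries NoSchedule taints, the test's DaemonSet declares no matching toleration, and the framework therefore excludes that node from the per-node launch check. For contrast, a DaemonSet intended to run on control-plane nodes would carry tolerations like the following sketch (illustrative, not part of this test's spec):

package daemonsetnotes

import corev1 "k8s.io/api/core/v1"

// Tolerations that would match the two taints listed in the log above.
var controlPlaneTolerations = []corev1.Toleration{
	{
		Key:      "node-role.kubernetes.io/control-plane",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	},
	{
		// Clusters of this vintage (v1.24) may still carry the legacy
		// "master" taint key alongside "control-plane", as the log shows.
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	},
}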
Jan 28 00:39:47.778: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:48.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:48.912: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:49.843: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:49.913: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:50.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:50.964: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:51.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:51.913: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:52.842: INFO: Pod daemon-set-5hprm is not available Jan 28 00:39:52.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:52.912: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:53.842: INFO: Pod daemon-set-5hprm is not available Jan 28 00:39:53.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:53.913: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:54.843: INFO: Pod daemon-set-5hprm is not available Jan 28 00:39:54.843: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
Jan 28 00:39:54.913: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:55.842: INFO: Pod daemon-set-5hprm is not available Jan 28 00:39:55.842: INFO: Wrong image for pod: daemon-set-97fr4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 28 00:39:55.912: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:56.916: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:57.913: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:58.912: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:39:59.843: INFO: Pod daemon-set-h7gxx is not available Jan 28 00:39:59.917: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node �[1mSTEP�[0m: Check that daemon pods are still running on every node of the cluster. 
Jan 28 00:39:59.988: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:00.051: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 28 00:40:00.051: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1 Jan 28 00:40:01.121: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:01.185: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 28 00:40:01.185: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1 Jan 28 00:40:02.122: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:02.186: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 28 00:40:02.186: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1 Jan 28 00:40:03.122: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:03.185: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 28 00:40:03.185: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1 Jan 28 00:40:04.122: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:04.185: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 28 00:40:04.185: INFO: Node capz-conf-x4p77 is running 0 daemon pod, expected 1 Jan 28 00:40:05.122: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 28 00:40:05.186: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 28 00:40:05.186: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 �[1mSTEP�[0m: Deleting DaemonSet "daemon-set" �[1mSTEP�[0m: deleting DaemonSet.extensions daemon-set in namespace daemonsets-595, will wait for the garbage collector to delete the pods Jan 28 00:40:05.725: INFO: Deleting DaemonSet.extensions daemon-set took: 64.799669ms Jan 28 00:40:05.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.91831ms Jan 28 00:40:08.988: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 28 00:40:08.988: INFO: Number of running nodes: 0, number of available 
pods: 0 in daemonset daemon-set Jan 28 00:40:09.050: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"16675"},"items":null} Jan 28 00:40:09.110: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"16675"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 28 00:40:09.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "daemonsets-595" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":61,"completed":18,"skipped":2344,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:40:09.444: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs �[1mSTEP�[0m: Gathering metrics Jan 28 00:40:10.707: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true) Jan 28 00:40:11.248: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For 
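For reference, a minimal Go sketch of the behavior this spec exercises; this is illustrative, not the suite's own code, and it assumes client-go / k8s.io/api v0.24+ (the client variable cs and the namespace are placeholders):

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rollDaemonSet(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "daemon-set"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				// RollingUpdate makes the controller replace pods in place on any template change.
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "daemon-set"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2", // the initial image seen in the log
				}}},
			},
		},
	}
	created, err := cs.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// The STEP "Update daemon pods image" amounts to this: change the template image
	// and let the controller roll pods until every node runs the new image.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.39"
	_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}

------------------------------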
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:40:09.444: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jan 28 00:40:10.707: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true)
Jan 28 00:40:11.248: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 28 00:40:11.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7916" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":61,"completed":19,"skipped":2455,"failed":0}
------------------------------
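The orphaning delete this spec performs, sketched in Go (illustrative, assumes client-go v0.24+): deleting a Deployment with PropagationPolicy=Orphan removes the Deployment but leaves its ReplicaSet and pods behind.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteOrphaning(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		// The GC strips ownerReferences from dependents instead of deleting them.
		PropagationPolicy: &orphan,
	})
}

------------------------------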
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":61,"completed":20,"skipped":2487,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould orphan pods created by rc if deleteOptions.OrphanDependents is nil�[0m �[37mtest/e2e/apimachinery/garbage_collector.go:439�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:40:29.943: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil test/e2e/apimachinery/garbage_collector.go:439 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[1mSTEP�[0m: Gathering metrics Jan 28 00:41:05.881: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true) Jan 28 00:41:06.423: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 28 00:41:06.423: INFO: Deleting pod "simpletest.rc-4kpw4" in namespace "gc-9957" Jan 28 00:41:06.494: INFO: Deleting pod "simpletest.rc-ts666" in namespace "gc-9957" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 28 00:41:06.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-9957" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":61,"completed":21,"skipped":2561,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] EmptyDir wrapper volumes�[0m �[1mshould not cause race condition when used for configmaps [Serial] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:41:06.705: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir-wrapper �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating 50 configmaps �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 28 00:41:10.627: INFO: Pod name wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods Jan 28 00:41:31.282: INFO: Deleting ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 took: 87.259175ms Jan 28 00:41:31.382: INFO: Terminating ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 pods took: 100.182945ms �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 28 00:41:35.609: INFO: Pod name wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods Jan 28 00:41:56.251: INFO: Deleting ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc took: 85.444388ms Jan 28 00:41:56.352: INFO: Terminating ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc pods took: 100.950993ms �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 28 00:42:00.578: INFO: Pod name wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206 in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods Jan 28 00:42:21.220: INFO: Deleting 
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:41:06.705: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:41:10.627: INFO: Pod name wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods
Jan 28 00:41:31.282: INFO: Deleting ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 took: 87.259175ms
Jan 28 00:41:31.382: INFO: Terminating ReplicationController wrapped-volume-race-0c457398-536d-44c3-af99-6ce22b41b6c3 pods took: 100.182945ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:41:35.609: INFO: Pod name wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods
Jan 28 00:41:56.251: INFO: Deleting ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc took: 85.444388ms
Jan 28 00:41:56.352: INFO: Terminating ReplicationController wrapped-volume-race-5270591b-5dac-448b-8c84-a1e1705b5fcc pods took: 100.950993ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:42:00.578: INFO: Pod name wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206 in namespace emptydir-wrapper-656, will wait for the garbage collector to delete the pods
Jan 28 00:42:21.220: INFO: Deleting ReplicationController wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206 took: 86.160816ms
Jan 28 00:42:21.320: INFO: Terminating ReplicationController wrapped-volume-race-3cbc015b-d2a7-4467-8792-5f258c176206 pods took: 100.594304ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:188
Jan 28 00:42:29.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-656" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":61,"completed":22,"skipped":2635,"failed":0}
------------------------------
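An illustrative pod template in the spirit of the wrapped-volume-race pods (names and image are made up; k8s.io/api v0.24+ assumed): one pod mounting many ConfigMap volumes at once, the pattern that used to race inside the emptyDir wrapper.

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapHeavyPod(n int) *corev1.Pod {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	for i := 0; i < n; i++ { // the spec above uses 50 ConfigMaps
		name := fmt.Sprintf("racey-configmap-%d", i)
		pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
			}},
		})
		pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
			corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	return pod
}

------------------------------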
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:42:29.537: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Jan 28 00:42:30.157: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 28 00:43:30.627: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  test/e2e/scheduling/preemption.go:322
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node capz-conf-x4p77.
STEP: Apply 10 fake resource to node capz-conf-mpgmr.
[It] validates proper pods are preempted
  test/e2e/scheduling/preemption.go:355
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  test/e2e/scheduling/preemption.go:343
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-x4p77
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-mpgmr
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 28 00:44:21.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6071" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":61,"completed":23,"skipped":2653,"failed":0}
------------------------------
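A sketch of the "medium" pod the spec schedules (k8s.io/api v0.24+ assumed; the priority class name and labels are illustrative, the suite defines its own): a TopologySpreadConstraint over the dedicated topology key forces the scheduler to preempt a lower-priority pod so the spread can be satisfied.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func mediumSpreadPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "medium", Labels: map[string]string{"e2e": "pts-preemption"}},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed name
			Containers:        []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.7"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // the dedicated key applied to both nodes above
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"e2e": "pts-preemption"}},
			}},
		},
	}
}

------------------------------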
[sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:44:22.018: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/node/taints.go:166
Jan 28 00:44:22.447: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 28 00:45:22.857: INFO: Waiting for terminating namespaces to be deleted...
[It] removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 00:45:22.918: INFO: Starting informer...
STEP: Starting pod...
Jan 28 00:45:23.044: INFO: Pod is running on capz-conf-mpgmr. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting short time to make sure Pod is queued for deletion
Jan 28 00:45:23.240: INFO: Pod wasn't evicted. Proceeding
Jan 28 00:45:23.240: INFO: Removing taint from Node
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting some time to make sure that toleration time passed.
Jan 28 00:46:38.432: INFO: Pod wasn't evicted. Test successful
[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:188
Jan 28 00:46:38.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-8948" for this suite.
• [SLOW TEST:136.545 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":61,"completed":24,"skipped":2741,"failed":0}
------------------------------
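The mechanism under test, sketched (k8s.io/api v0.24+ assumed; the 60s toleration window is illustrative): a NoExecute taint starts an eviction countdown for pods whose toleration sets tolerationSeconds, and removing the taint before the countdown expires cancels the eviction, which is why the log ends with "Pod wasn't evicted".

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func evictTaintAndToleration() (corev1.Taint, corev1.Toleration) {
	sixty := int64(60)
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-evict-taint-key",
		Value:  "evictTaintVal",
		Effect: corev1.TaintEffectNoExecute,
	}
	tol := corev1.Toleration{
		Key:               taint.Key,
		Operator:          corev1.TolerationOpEqual,
		Value:             taint.Value,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &sixty, // the pod survives the taint this long before eviction
	}
	return taint, tol
}

------------------------------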
�[32m•�[0m{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory","total":61,"completed":25,"skipped":2777,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Pods�[0m �[1mshould have their auto-restart back-off timer reset on image update [Slow][NodeConformance]�[0m �[37mtest/e2e/common/node/pods.go:682�[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:46:39.451: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] test/e2e/common/node/pods.go:682 Jan 28 00:46:40.006: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 28 00:46:42.069: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 28 00:46:44.068: INFO: The status of Pod pod-back-off-image is Running (Ready = true) �[1mSTEP�[0m: getting restart delay-0 Jan 28 00:48:38.305: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-28 00:47:49 +0000 UTC restartedAt=2023-01-28 00:48:37 +0000 UTC (48s) �[1mSTEP�[0m: getting restart delay-1 Jan 28 00:50:18.178: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-28 00:48:42 +0000 UTC restartedAt=2023-01-28 00:50:17 +0000 UTC (1m35s) �[1mSTEP�[0m: getting restart delay-2 Jan 28 00:53:13.463: INFO: getRestartDelay: restartCount = 6, finishedAt=2023-01-28 00:50:22 +0000 UTC restartedAt=2023-01-28 00:53:12 +0000 UTC (2m50s) �[1mSTEP�[0m: updating the image Jan 28 00:53:14.092: INFO: Successfully updated pod "pod-back-off-image" �[1mSTEP�[0m: get restart delay after image update Jan 28 00:53:42.213: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-28 00:53:23 +0000 UTC restartedAt=2023-01-28 00:53:40 +0000 UTC (17s) [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jan 28 00:53:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-7184" for this suite. 
[sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
test/e2e/common/node/pods.go:682
[BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:46:39.451: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  test/e2e/common/node/pods.go:191
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:682
Jan 28 00:46:40.006: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:46:42.069: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:46:44.068: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Jan 28 00:48:38.305: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-28 00:47:49 +0000 UTC restartedAt=2023-01-28 00:48:37 +0000 UTC (48s)
STEP: getting restart delay-1
Jan 28 00:50:18.178: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-28 00:48:42 +0000 UTC restartedAt=2023-01-28 00:50:17 +0000 UTC (1m35s)
STEP: getting restart delay-2
Jan 28 00:53:13.463: INFO: getRestartDelay: restartCount = 6, finishedAt=2023-01-28 00:50:22 +0000 UTC restartedAt=2023-01-28 00:53:12 +0000 UTC (2m50s)
STEP: updating the image
Jan 28 00:53:14.092: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Jan 28 00:53:42.213: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-28 00:53:23 +0000 UTC restartedAt=2023-01-28 00:53:40 +0000 UTC (17s)
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:188
Jan 28 00:53:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7184" for this suite.
• [SLOW TEST:422.894 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:682
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":61,"completed":26,"skipped":2829,"failed":0}
------------------------------
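What drives the growing delays above, sketched (image and command are illustrative): a container that keeps exiting puts the kubelet into restart back-off, roughly doubling the delay up to a 5m cap (48s, 1m35s, 2m50s in this run); updating the pod's image resets the back-off, which is why the post-update delay drops to 17s.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func crashingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-back-off-image"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "back-off",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"sh", "-c", "sleep 5; exit 1"}, // exits repeatedly to trigger restart back-off
			}},
		},
	}
}

------------------------------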
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:53:42.347: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jan 28 00:53:53.364: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true)
Jan 28 00:53:53.901: INFO: [controller-manager metrics dump elided; same empty counter list as above]
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 28 00:53:53.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7086" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":61,"completed":27,"skipped":2963,"failed":0}
------------------------------
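The counterpart to the orphaning cases, sketched (client-go v0.24+ assumed): with PropagationPolicy=Background the GC deletes the RC's pods after the RC itself is gone, which is what "wait for all pods to be garbage collected" observes.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteCascading(ctx context.Context, cs kubernetes.Interface, ns, rcName string) error {
	bg := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, rcName, metav1.DeleteOptions{
		// The GC removes dependents asynchronously once the owner is deleted.
		PropagationPolicy: &bg,
	})
}

------------------------------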
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end
test/e2e/windows/gmsa_full.go:97
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:53:54.032: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works end to end
  test/e2e/windows/gmsa_full.go:97
STEP: finding the worker node that fulfills this test's assumptions
Jan 28 00:53:54.525: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 28 00:53:54.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-full-test-windows-8279" for this suite.
S [SKIPPING] [0.624 seconds]
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
test/e2e/windows/framework.go:27
  GMSA support
  test/e2e/windows/gmsa_full.go:96
    works end to end [It]
    test/e2e/windows/gmsa_full.go:97
    Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
    test/e2e/windows/gmsa_full.go:103
------------------------------
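The precondition check that made this spec skip, sketched (client-go v0.24+ assumed; the helper is illustrative): the test requires exactly one node labeled agentpool=windowsgmsa, and this cluster had none.

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func findGMSANode(ctx context.Context, cs kubernetes.Interface) (string, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "agentpool=windowsgmsa",
	})
	if err != nil {
		return "", err
	}
	if len(nodes.Items) != 1 {
		return "", fmt.Errorf("expected to find exactly one node with the %q label, found %d",
			"agentpool=windowsgmsa", len(nodes.Items))
	}
	return nodes.Items[0].Name, nil
}

------------------------------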
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 00:53:54.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 28 00:54:35.978: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true)
Jan 28 00:54:36.530: INFO: [controller-manager metrics dump elided; same empty counter list as above]
Jan 28 00:54:36.530: INFO: Deleting pod "simpletest.rc-2bc42" in namespace "gc-1877"
[cleanup deletes all 100 orphaned simpletest.rc-* pods one by one, roughly 70ms apart, through:]
Jan 28 00:54:43.549: INFO: Deleting pod "simpletest.rc-zv7mt" in namespace "gc-1877"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 28 00:54:43.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1877" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":61,"completed":28,"skipped":3185,"failed":0}
------------------------------
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":61,"completed":29,"skipped":3212,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] Daemon set [Serial]�[0m �[1mshould retry creating failed daemon pods [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 28 00:54:49.840: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename daemonsets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating a simple DaemonSet "daemon-set" �[1mSTEP�[0m: Check that daemon pods launch on every node of the cluster. 
Jan 28 00:54:50.658: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 28 00:54:50.720: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 00:54:50.720: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
[... the same three-line poll repeated roughly once per second through 00:55:49; from 00:55:37 the poll also skips node capz-conf-x4p77, which picked up node.kubernetes.io/unreachable:NoSchedule at 00:55:37 and node.kubernetes.io/unreachable:NoExecute at 00:55:42 ...]
Jan 28 00:55:49.853: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 28 00:55:49.853: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 28 00:55:50.113: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx (and capz-conf-x4p77), skip checking these nodes
Jan 28 00:55:50.175: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 00:55:50.175: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
[... poll repeated roughly once per second through 00:55:59 ...]
Jan 28 00:56:00.304: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 28 00:56:00.304: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted.
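The repeated "can't tolerate node ... skip checking this node" lines come from the node filter the check applies before counting daemon pods: a node only figures in the expected total if the DaemonSet's pod spec tolerates every NoSchedule/NoExecute taint on it. A minimal sketch of that rule (helper names are illustrative, not the e2e framework's own):

```go
// Sketch: a node counts toward the DaemonSet's expected pod total only if
// the pod tolerations cover all of its NoSchedule/NoExecute taints.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func toleratesAll(tols []corev1.Toleration, taints []corev1.Taint) bool {
	for _, taint := range taints {
		if taint.Effect != corev1.TaintEffectNoSchedule && taint.Effect != corev1.TaintEffectNoExecute {
			continue
		}
		covered := false
		for _, t := range tols {
			if t.ToleratesTaint(&taint) {
				covered = true
				break
			}
		}
		if !covered {
			return false // "skip checking this node"
		}
	}
	return true
}

func main() {
	// The two control-plane taints from the log above.
	controlPlane := []corev1.Taint{
		{Key: "node-role.kubernetes.io/master", Effect: corev1.TaintEffectNoSchedule},
		{Key: "node-role.kubernetes.io/control-plane", Effect: corev1.TaintEffectNoSchedule},
	}
	fmt.Println(toleratesAll(nil, controlPlane)) // false: the node is skipped
}
```

This is why only capz-conf-mpgmr is expected to run a daemon pod: the control-plane node is tainted NoSchedule, and capz-conf-x4p77 later becomes unreachable.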
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1299, will wait for the garbage collector to delete the pods
Jan 28 00:56:00.669: INFO: Deleting DaemonSet.extensions daemon-set took: 70.866637ms
Jan 28 00:56:00.769: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.655206ms
Jan 28 00:57:22.530: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 00:57:22.531: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 28 00:57:22.592: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22891"},"items":null}
Jan 28 00:57:22.653: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22892"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 28 00:57:22.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1299" for this suite.
• [SLOW TEST:153.134 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":61,"completed":30,"skipped":3334,"failed":0}
------------------------------
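The teardown above deletes the DaemonSet and then waits (~82 s, 00:56:00 to 00:57:22) for the garbage collector to reap its pods. A sketch of that pattern with client-go; the propagation policy and label selector are my assumptions, not pulled from the framework:

```go
// Sketch: delete a DaemonSet, then poll until the garbage collector has
// removed every daemon pod. Namespace "daemonsets-1299" is from the log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "daemonsets-1299"

	// Background propagation: the DaemonSet object goes first, the GC reaps pods after.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().DaemonSets(ns).Delete(ctx, "daemon-set",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}

	// Poll until no daemon pods remain (label assumed from the earlier sketch).
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
			LabelSelector: "daemonset-name=daemon-set",
		})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) == 0 {
			fmt.Println("all daemon pods deleted")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```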
Failure Jan 28 00:57:29.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:29.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:31.630: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:31.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:33.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:33.698: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:35.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:35.701: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:37.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:37.699: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:39.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:39.699: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 00:57:41.630: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:41.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:43.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:43.699: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:45.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:45.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:47.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:47.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:49.629: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:49.701: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:51.630: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 00:57:51.700: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 00:57:53.630: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
(message repeated every ~2s through Jan 28 00:58:24.055)
Jan 28 00:58:24.055: INFO: Waiting for terminating namespaces to be deleted...
[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 00:58:24.117: INFO: Starting informer...
STEP: Starting pods...
Jan 28 00:58:24.307: INFO: Pod1 is running on capz-conf-mpgmr. Tainting Node
Jan 28 00:58:30.615: INFO: Pod2 is running on capz-conf-mpgmr. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod1 and Pod2 to be deleted
Jan 28 00:58:37.477: INFO: Noticed Pod "taint-eviction-b1" gets evicted.
Jan 28 00:58:57.621: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
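For context on what this spec exercises: a pod survives a NoExecute taint only while it carries a matching toleration, and a toleration with tolerationSeconds set bounds that grace period, after which the taint manager evicts the pod. A minimal sketch of such a pod spec in Go, using the taint key, value, and effect from the log above; the 60s duration, pause image, and pod layout are illustrative assumptions, not the suite's actual helper code:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Tolerate the test taint for 60s (illustrative value); once that window
    	// elapses, the NoExecute taint manager evicts the pod even though the
    	// taint is still on the node.
    	tolerationSeconds := int64(60)
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "taint-eviction-b1"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "pause",
    				Image: "registry.k8s.io/pause:3.9", // assumed image, not from the log
    			}},
    			Tolerations: []corev1.Toleration{{
    				// Taint key/value/effect as applied by the spec above.
    				Key:               "kubernetes.io/e2e-evict-taint-key",
    				Operator:          corev1.TolerationOpEqual,
    				Value:             "evictTaintVal",
    				Effect:            corev1.TaintEffectNoExecute,
    				TolerationSeconds: &tolerationSeconds,
    			}},
    		},
    	}
    	fmt.Printf("%+v\n", pod.Spec.Tolerations[0])
    }

The two eviction notices above ("taint-eviction-b1", "taint-eviction-b2") are consistent with two pods carrying different tolerationSeconds values being evicted at different times after the taint lands.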
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:188
Jan 28 00:58:57.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 00:58:57.886: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
(message repeated every ~2s through Jan 28 01:01:58.019)
Jan 28 01:01:58.020: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "taint-multiple-pods-5614" for this suite.
• Failure in Spec Teardown (AfterEach) [275.109 seconds]
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
test/e2e/node/framework.go:23
  evicts pods with minTolerationSeconds [Disruptive] [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:01:58.020: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  	test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
  	test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc00021f860, 0x741f9a8)
  	/usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
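The teardown gate that fails here ("Waiting up to 3m0s for all (but 0) nodes to be ready") polls every node's Ready condition and treats capz-conf-x4p77's persistent NotReady state, plus the node-controller taints, as a failure of the whole spec. A minimal client-go sketch, assuming the /tmp/kubeconfig path the suite logs, for reproducing the same inspection by hand; this is an illustrative standalone program, not the framework's own check:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as logged by the suite (">>> kubeConfig: /tmp/kubeconfig").
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "capz-conf-x4p77", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// Status "False" here is what the framework logs as
    			// "Condition Ready of node ... is false".
    			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
    		}
    	}
    	for _, t := range node.Spec.Taints {
    		// node.kubernetes.io/not-ready taints are added by the node controller
    		// when the kubelet stops reporting; their TimeAdded matches the log.
    		fmt.Printf("taint %s:%s added %v\n", t.Key, t.Effect, t.TimeAdded)
    	}
    }

The condition's Reason and Message (not shown in this log) usually indicate whether the kubelet stopped posting status, which is the next thing to check on the Windows node itself.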
{"msg":"FAILED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":61,"completed":30,"skipped":3395,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]"]}
S
------------------------------
[sig-windows] [Feature:Windows] Cpu Resources [Serial]
Container limits
  should not be exceeded after waiting 2 minutes
  test/e2e/windows/cpu_limits.go:43
[BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:01:58.085: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cpu-resources-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be exceeded after waiting 2 minutes
  test/e2e/windows/cpu_limits.go:43
STEP: Creating one pod with limit set to '0.5'
Jan 28 01:01:58.646: INFO: The status of Pod cpulimittest-b18cfc0a-8288-404f-9c69-59ab4c7c3b87 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:02:00.710: INFO: The status of Pod cpulimittest-b18cfc0a-8288-404f-9c69-59ab4c7c3b87 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:02:02.708: INFO: The status of Pod cpulimittest-b18cfc0a-8288-404f-9c69-59ab4c7c3b87 is Running (Ready = true)
STEP: Creating one pod with limit set to '500m'
Jan 28 01:02:02.896: INFO: The status of Pod cpulimittest-04bf60d3-5e4c-46ff-bbff-1ec2798a6a46 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:02:04.961: INFO: The status of Pod cpulimittest-04bf60d3-5e4c-46ff-bbff-1ec2798a6a46 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:02:06.960: INFO: The status of Pod cpulimittest-04bf60d3-5e4c-46ff-bbff-1ec2798a6a46 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:02:08.959: INFO: The status of Pod cpulimittest-04bf60d3-5e4c-46ff-bbff-1ec2798a6a46 is Running (Ready = true)
STEP: Waiting 2 minutes
STEP: Ensuring pods are still running
STEP: Ensuring cpu doesn't exceed limit by >5%
STEP: Gathering node summary stats
Jan 28 01:04:09.289: INFO: Pod cpulimittest-b18cfc0a-8288-404f-9c69-59ab4c7c3b87 usage: 0.47219917400000005
STEP: Gathering node summary stats
Jan 28 01:04:09.424: INFO: Pod cpulimittest-04bf60d3-5e4c-46ff-bbff-1ec2798a6a46 usage: 0.49691447800000005
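The two limits above are equivalent quantities: '0.5' and '500m' both parse to half a CPU core, and the spec then asserts measured usage (here ~0.472 and ~0.497 cores) stays within 5% of that limit. A minimal sketch of the resource stanza such a pod carries; the container name and image are illustrative assumptions, not the suite's actual values:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	// "0.5" and "500m" parse to the same quantity: half a CPU core.
    	half := resource.MustParse("500m")
    	c := corev1.Container{
    		Name:  "cpulimittest",                                   // illustrative
    		Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative
    		Resources: corev1.ResourceRequirements{
    			Limits: corev1.ResourceList{corev1.ResourceCPU: half},
    		},
    	}
    	// The logged usages (~0.472, ~0.497) sit just under this 0.5-core limit,
    	// inside the 5% tolerance the spec allows for throttling jitter.
    	fmt.Println(c.Resources.Limits.Cpu().String()) // "500m"
    }

Note that the CPU-limit assertion itself passed; this spec, like the previous one, only fails in AfterEach because capz-conf-x4p77 is still NotReady.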
[AfterEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:04:09.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:04:09.491: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
(message repeated every ~2s through Jan 28 01:07:09.625)
Jan 28 01:07:09.626: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "cpu-resources-test-windows-3910" for this suite.
• Failure in Spec Teardown (AfterEach) [311.606 seconds]
[sig-windows] [Feature:Windows] Cpu Resources [Serial]
test/e2e/windows/framework.go:27
  Container limits [AfterEach]
  test/e2e/windows/cpu_limits.go:42
    should not be exceeded after waiting 2 minutes
    test/e2e/windows/cpu_limits.go:43

    Jan 28 01:07:09.626: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

    vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

    Full Stack Trace
    k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    	test/e2e/e2e.go:130 +0x6bb
    k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    	test/e2e/e2e_test.go:136 +0x19
    testing.tRunner(0xc00021f860, 0x741f9a8)
    	/usr/local/go/src/testing/testing.go:1446 +0x10b
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","total":61,"completed":30,"skipped":3396,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes"]}
SSSS... (skipped specs)
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:07:09.693: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Jan 28 01:07:10.123: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 01:07:10.189: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
[identical Ready-condition poll line repeated every ~2s through Jan 28 01:08:10.388]
Jan 28 01:08:10.388: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 01:08:10.449: INFO: Logging pods the apiserver thinks is on node capz-conf-mpgmr before test
Jan 28 01:08:10.517: INFO: calico-node-windows-pkjkv from calico-system started at 2023-01-27 23:28:48 +0000 UTC (2 container statuses recorded)
Jan 28 01:08:10.517: INFO: Container calico-node-felix ready: true, restart count 1
Jan 28 01:08:10.517: INFO: Container calico-node-startup ready: true, restart count 0
Jan 28 01:08:10.517: INFO: containerd-logger-7b895 from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 28 01:08:10.517: INFO: Container containerd-logger ready: true, restart count 0
Jan 28 01:08:10.517: INFO: csi-azuredisk-node-win-jxz5f from kube-system started at 2023-01-28 00:58:57 +0000 UTC (3 container statuses recorded)
Jan 28 01:08:10.517: INFO: Container azuredisk ready: true, restart count 0
Jan 28 01:08:10.517: INFO: Container liveness-probe ready: true, restart count 0
Jan 28 01:08:10.517: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 28 01:08:10.517: INFO: csi-proxy-jp5b8 from kube-system started at 2023-01-28 00:58:57 +0000 UTC (1 container statuses recorded)
Jan 28 01:08:10.517: INFO: Container csi-proxy ready: true, restart count 0
Jan 28 01:08:10.517: INFO: kube-proxy-windows-bd49q from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 28 01:08:10.517: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/framework/framework.go:652
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.173e53a79ad560b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.]
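That FailedScheduling event is the behavior under test: the spec creates a pod whose nodeSelector matches no node's labels and then expects the scheduler to report exactly this kind of per-node rejection. A hedged sketch of such a pod; the label key/value and image are illustrative, not the test's literals, and cs is the kubernetes.Interface from the earlier sketch:

// Create a pod with a nodeSelector no node can satisfy, so the scheduler
// emits a FailedScheduling event like the one logged above.
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{
			"e2e-test/label": "no-node-has-this-value", // deliberately unmatchable (illustrative)
		},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "registry.k8s.io/pause:3.9", // any always-pullable image works
		}},
	},
}
_, err := cs.CoreV1().Pods("sched-pred-2903").Create(context.TODO(), pod, metav1.CreateOptions{})

The interesting detail in the event is that all three rejections differ: one node fails the selector, the control plane fails on its master taint, and capz-conf-x4p77 fails on the not-ready taint that has been present since 00:57:26.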
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:08:11.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:08:11.905: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
[identical Ready-condition poll line repeated every ~2s through Jan 28 01:11:12.260]
Failure Jan 28 01:11:12.260: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "sched-pred-2903" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
• Failure in Spec Teardown (AfterEach) [242.633 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if not matching [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:11:12.260: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  	test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
  	test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc00021f860, 0x741f9a8)
  	/usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1493 +0x35f
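Both teardown failures come from the same gate: a 2-second poll with the budget shown in the AfterEach logs (1m0s in BeforeEach, 3m0s in AfterEach). A sketch of that gate using apimachinery's wait helpers; this is an assumption about its shape, not the framework's literal code, and it reuses cs from the first sketch plus the k8s.io/apimachinery/pkg/util/wait and time imports:

// Poll every 2s for up to 3m until every node reports Ready=True,
// mirroring the cadence and timeout visible in the log above.
err := wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, nil // not ready yet; keep polling
			}
		}
	}
	return true, nil
})
if err != nil {
	// On timeout this surfaces as:
	// "FAIL: All nodes should be ready after test, Not ready nodes: ..."
	fmt.Printf("nodes never became ready: %v\n", err)
}

Because the gate runs after every [Serial] spec and the node never recovers, each subsequent spec burns its full timeout and fails in teardown regardless of whether its own assertions passed, which is why the failure count keeps climbing below.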
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":61,"completed":30,"skipped":3471,"failed":3,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
SSSS... (skipped specs)
------------------------------
[sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  Kubelet stats collection for Windows nodes
    when running 10 pods
      should return within 10 seconds
      test/e2e/windows/kubelet_stats.go:47
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:11:12.327: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-stats-test-windows-serial
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should return within 10 seconds
  test/e2e/windows/kubelet_stats.go:47
STEP: Selecting a Windows node
Jan 28 01:11:12.823: INFO: Using node: capz-conf-mpgmr
STEP: Scheduling 10 pods
Jan 28 01:11:12.954: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true)
[equivalent Pending polls for all 10 statscollectiontest pods repeated every ~2s; the log excerpt is truncated mid-poll at Jan 28 01:11:39]
be Running (with Ready = true) Jan 28 01:11:39.079: INFO: The status of Pod statscollectiontest-95766b8e-0593-4eab-a3de-febe95087d99-2 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:39.080: INFO: The status of Pod statscollectiontest-d1f5c4aa-9cff-484c-9660-e3c3be2070c3-1 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.019: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.024: INFO: The status of Pod statscollectiontest-e025a119-93b0-4b1d-8b3c-8e323b5a4432-4 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.025: INFO: The status of Pod statscollectiontest-b5d46515-d89c-4d4c-9056-52006c5ad549-9 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.026: INFO: The status of Pod statscollectiontest-eadf7af0-0d21-47f7-be86-e74a0458c884-3 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.070: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.078: INFO: The status of Pod statscollectiontest-c450a3f6-4441-43ab-be91-6dc8fdbd1d4b-0 is Running (Ready = true) Jan 28 01:11:41.079: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.079: INFO: The status of Pod statscollectiontest-d1f5c4aa-9cff-484c-9660-e3c3be2070c3-1 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.080: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:41.081: INFO: The status of Pod statscollectiontest-95766b8e-0593-4eab-a3de-febe95087d99-2 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.017: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.024: INFO: The status of Pod statscollectiontest-eadf7af0-0d21-47f7-be86-e74a0458c884-3 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.025: INFO: The status of Pod statscollectiontest-e025a119-93b0-4b1d-8b3c-8e323b5a4432-4 is Running (Ready = true) Jan 28 01:11:43.025: INFO: The status of Pod statscollectiontest-b5d46515-d89c-4d4c-9056-52006c5ad549-9 is Running (Ready = true) Jan 28 01:11:43.070: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.079: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.079: INFO: The status of Pod statscollectiontest-95766b8e-0593-4eab-a3de-febe95087d99-2 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.081: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:43.081: INFO: The status of Pod statscollectiontest-d1f5c4aa-9cff-484c-9660-e3c3be2070c3-1 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:45.017: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = 
true) Jan 28 01:11:45.024: INFO: The status of Pod statscollectiontest-eadf7af0-0d21-47f7-be86-e74a0458c884-3 is Running (Ready = true) Jan 28 01:11:45.069: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:45.077: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:45.078: INFO: The status of Pod statscollectiontest-95766b8e-0593-4eab-a3de-febe95087d99-2 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:45.078: INFO: The status of Pod statscollectiontest-d1f5c4aa-9cff-484c-9660-e3c3be2070c3-1 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:45.079: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:47.016: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:47.071: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:47.077: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:47.077: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:47.078: INFO: The status of Pod statscollectiontest-d1f5c4aa-9cff-484c-9660-e3c3be2070c3-1 is Running (Ready = true) Jan 28 01:11:47.079: INFO: The status of Pod statscollectiontest-95766b8e-0593-4eab-a3de-febe95087d99-2 is Running (Ready = true) Jan 28 01:11:49.017: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:49.070: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:49.077: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:49.077: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:51.018: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:51.070: INFO: The status of Pod statscollectiontest-d2947928-7fc9-4db4-8cf4-4c00e64d4bfe-8 is Running (Ready = true) Jan 28 01:11:51.075: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:51.076: INFO: The status of Pod statscollectiontest-e6509167-78af-4be4-81c9-c4ca3ca1fc89-6 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:53.016: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true) Jan 28 01:11:53.078: INFO: The status of Pod statscollectiontest-be9eb188-efcb-46ce-b652-54368697edf6-7 is Running (Ready = true) Jan 28 01:11:53.078: INFO: The status of Pod 
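The polls above come from the e2e framework waiting for each test pod to be Running with its Ready condition true. Below is a minimal client-go sketch of such a 2-second poll loop; it is not the framework's own helper, and the pod name is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podRunningAndReady reports whether a pod is Running with its
// PodReady condition True, the state the polls above wait for.
func podRunningAndReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// The namespace appears in the log; the pod name is a placeholder.
	ns, name := "kubelet-stats-test-windows-serial-6739", "example-pod"

	// Poll every 2s (the cadence seen in the log) up to a 3m deadline.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if podRunningAndReady(pod) {
			fmt.Printf("The status of Pod %s is Running (Ready = true)\n", name)
			return true, nil
		}
		fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, pod.Status.Phase)
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
```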
Jan 28 01:11:55.017: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:11:57.018: INFO: The status of Pod statscollectiontest-aaeb83a7-74ad-4308-b88b-b08eb644e053-5 is Running (Ready = true)
STEP: Waiting up to 3 minutes for pods to be running
Jan 28 01:11:57.079: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-6739' to be running and ready
Jan 28 01:11:57.271: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-6739' are running and ready (0 seconds elapsed)
Jan 28 01:11:57.271: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-6739', 0 are Running and Ready.
STEP: Getting kubelet stats 5 times and checking average duration
Jan 28 01:12:23.073: INFO: Getting kubelet stats for node capz-conf-mpgmr took an average of 157 milliseconds over 5 iterations
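The 157 millisecond figure above is the average of five timed fetches of the kubelet Summary API, reached through the apiserver's node proxy. Here is a rough client-go sketch of that measurement; the :10250 kubelet port and the error handling are assumptions, not the test's exact code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const node = "capz-conf-mpgmr"
	const iterations = 5

	var total time.Duration
	for i := 0; i < iterations; i++ {
		start := time.Now()
		// Fetch the kubelet Summary API via the apiserver's node proxy,
		// i.e. GET /api/v1/nodes/<node>:10250/proxy/stats/summary.
		_, err := client.CoreV1().RESTClient().Get().
			Resource("nodes").
			SubResource("proxy").
			Name(fmt.Sprintf("%s:10250", node)).
			Suffix("stats/summary").
			DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		total += time.Since(start)
	}
	avg := total / iterations
	fmt.Printf("Getting kubelet stats for node %s took an average of %d milliseconds over %d iterations\n",
		node, avg.Milliseconds(), iterations)
}
```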
[AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:12:23.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:12:23.141: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[the same not-ready message repeats every ~2s for the full 3m0s timeout]
Jan 28 01:15:23.274: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:15:23.274: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
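The FAIL above is the suite's AfterEach sweep finding node capz-conf-x4p77 with Ready = false while it carries the node controller's not-ready taints. A small client-go sketch of how that state can be inspected; illustrative only, not the framework's implementation.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "capz-conf-x4p77", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		// When a node stops reporting ready, the node controller adds taints
		// like the ones in the log: node.kubernetes.io/not-ready with the
		// NoSchedule and NoExecute effects.
		for _, t := range node.Spec.Taints {
			fmt.Printf("node %s tainted: {%s %s %v}\n", node.Name, t.Key, t.Effect, t.TimeAdded)
		}
	}
}
```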
STEP: Destroying namespace "kubelet-stats-test-windows-serial-6739" for this suite.
• Failure in Spec Teardown (AfterEach) [251.012 seconds]
[sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
test/e2e/windows/framework.go:27
  Kubelet stats collection for Windows nodes [AfterEach]
  test/e2e/windows/kubelet_stats.go:43
    when running 10 pods
    test/e2e/windows/kubelet_stats.go:45
      should return within 10 seconds
      test/e2e/windows/kubelet_stats.go:47

      Jan 28 01:15:23.274: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

      vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

      Full Stack Trace
      k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
      	test/e2e/e2e.go:130 +0x6bb
      k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
      	test/e2e/e2e_test.go:136 +0x19
      testing.tRunner(0xc00021f860, 0x741f9a8)
      	/usr/local/go/src/testing/testing.go:1446 +0x10b
      created by testing.(*T).Run
      	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","total":61,"completed":30,"skipped":3496,"failed":4,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds"]}
------------------------------
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support can read and write file to remote SMB folder
test/e2e/windows/gmsa_full.go:167
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:15:23.341: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] can read and write file to remote SMB folder
  test/e2e/windows/gmsa_full.go:167
STEP: finding the worker node that fulfills this test's assumptions
Jan 28 01:15:23.833: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 28 01:15:23.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:15:23.900: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[the same not-ready message repeats every ~2s]
Jan 28 01:17:41.968: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
Failure Jan 28 01:17:43.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:45.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:47.968: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:49.970: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:51.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:53.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:55.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:57.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:17:59.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:01.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:03.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:05.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 01:18:07.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:09.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:11.968: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:13.971: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:15.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:17.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:19.969: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:21.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:23.968: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:24.034: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:24.035: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f860, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Destroying namespace "gmsa-full-test-windows-2664" for this suite. 
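The run of identical messages above is the suite's post-test readiness gate: after each spec it polls node conditions roughly every 2s and fails the spec with "All nodes should be ready after test" if any node stays NotReady. Below is a minimal sketch of that style of check with client-go; it is not the e2e framework's actual helper, and the kubeconfig path and 3m timeout are simply taken from values visible elsewhere in this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path as printed later in this log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 3m, mirroring "Waiting up to 3m0s for all
	// (but 0) nodes to be ready" later in this log.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := true
		for i := range nodes.Items {
			n := &nodes.Items[i]
			if !nodeReady(n) {
				ready = false
				// The framework also prints the NodeController taints, as seen above.
				fmt.Printf("Condition Ready of node %s is false, taints: %v\n", n.Name, n.Spec.Taints)
			}
		}
		return ready, nil
	})
	if err != nil {
		fmt.Println("All nodes should be ready after test:", err)
	}
}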
S [SKIPPING] [180.760 seconds]
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:27
GMSA support test/e2e/windows/gmsa_full.go:96
can read and write file to remote SMB folder [It] test/e2e/windows/gmsa_full.go:167
Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
test/e2e/windows/gmsa_full.go:173
------------------------------
[... run of skipped-spec "S" progress markers ...]
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:18:24.104: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
Jan 28 01:18:24.664: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[It] should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652
Jan 28 01:18:24.792: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:18:24.792: FAIL: Conformance test suite needs a cluster with at least 2 nodes. Expected <int>: 1 to be > <int>: 1
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.9() test/e2e/apps/daemon_set.go:434 +0x1dd
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
Jan 28 01:18:24.919: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"27219"},"items":null}
Jan 28 01:18:24.980: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"27219"},"items":null}
Jan 28 01:18:25.046: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
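Both outcomes above are precondition failures rather than bugs in the specs themselves: the GMSA spec skips because no node carries the agentpool=windowsgmsa label, and the DaemonSet rollback spec fails because only one node is still usable once capz-conf-x4p77 goes NotReady. A sketch of those two preconditions expressed with client-go follows; the real suite's schedulability check also filters on readiness and taints, so this simplified version is illustrative only.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GMSA precondition: exactly one node labeled agentpool=windowsgmsa.
	gmsa, err := cs.CoreV1().Nodes().List(context.TODO(),
		metav1.ListOptions{LabelSelector: "agentpool=windowsgmsa"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("nodes with agentpool=windowsgmsa: %d (spec expects exactly 1)\n", len(gmsa.Items))

	// DaemonSet rollback precondition: more than one usable node.
	all, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	usable := 0
	for _, n := range all.Items {
		if !n.Spec.Unschedulable {
			usable++
		}
	}
	fmt.Printf("schedulable nodes: %d (spec expects > 1)\n", usable)
}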
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188
STEP: Collecting events from namespace "daemonsets-8122".
STEP: Found 0 events.
Jan 28 01:18:25.231: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 01:18:25.231: INFO: Jan 28 01:18:25.297: INFO: Logging node info for node capz-conf-cdfcgm-control-plane-t22kx Jan 28 01:18:25.360: INFO: Node Info: &Node{ObjectMeta:{capz-conf-cdfcgm-control-plane-t22kx 24f43a17-0b95-4e36-a475-6ad31c91f615 27099 0 2023-01-27 23:17:40 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_B2s beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus3 failure-domain.beta.kubernetes.io/zone:westus3-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-cdfcgm-control-plane-t22kx kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_B2s topology.disk.csi.azure.com/zone:westus3-1 topology.kubernetes.io/region:westus3 topology.kubernetes.io/zone:westus3-1] map[cluster.x-k8s.io/cluster-name:capz-conf-cdfcgm cluster.x-k8s.io/cluster-namespace:capz-conf-cdfcgm cluster.x-k8s.io/machine:capz-conf-cdfcgm-control-plane-j7gpw cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-cdfcgm-control-plane csi.volume.kubernetes.io/nodeid:{"csi.tigera.io":"capz-conf-cdfcgm-control-plane-t22kx","disk.csi.azure.com":"capz-conf-cdfcgm-control-plane-t22kx"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.252.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-27 23:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-27 23:17:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2023-01-27 23:18:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-01-27 23:18:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {calico-node Update v1 2023-01-27 23:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-27 23:20:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-conf-cdfcgm/providers/Microsoft.Compute/virtualMachines/capz-conf-cdfcgm-control-plane-t22kx,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4123176960 0} {<nil>} 4026540Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4018319360 0} {<nil>} 3924140Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-27 23:18:57 +0000 UTC,LastTransitionTime:2023-01-27 23:18:57 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:44 +0000 UTC,LastTransitionTime:2023-01-27 23:17:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:44 +0000 UTC,LastTransitionTime:2023-01-27 23:17:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:44 +0000 UTC,LastTransitionTime:2023-01-27 23:17:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 01:17:44 +0000 UTC,LastTransitionTime:2023-01-27 23:18:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-cdfcgm-control-plane-t22kx,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:33363279bc3948bea8a995390b326f3d,SystemUUID:5c3db9ec-3ae1-1b43-81c4-48ae830ee7ed,BootID:defbdda8-1954-4864-bb36-9df831adfe69,KernelVersion:5.4.0-1100-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-apiserver:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:128011592,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-controller-manager:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:117619886,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:112212023,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:96300330,},ContainerImage{Names:[docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 docker.io/calico/cni:v3.25.0],SizeBytes:87984941,},ContainerImage{Names:[docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 docker.io/calico/node:v3.25.0],SizeBytes:87185935,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:3ef7d954946bd1cf9e5e3564a8d1acf8e5852616f7ae96bcbc5ced8c275483ee mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0],SizeBytes:61391360,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:9ba6483d2f8aa6051cb3a50e42d638fc17a6e4699a6689f054969024b7c12944 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0],SizeBytes:58560473,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:bc317fea7e7bbaff65130d7ac6ea7c96bc15eb1f086374b8c3359f11988ac024 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0],SizeBytes:57948644,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-scheduler:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:49028781,},ContainerImage{Names:[docker.io/calico/apiserver@sha256:9819c1b569e60eec4dbab82c1b41cee80fe8af282b25ba2c174b2a00ae555af6 docker.io/calico/apiserver:v3.25.0],SizeBytes:35624155,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26 
registry.k8s.io/kube-apiserver:v1.26.1],SizeBytes:35320235,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b registry.k8s.io/kube-controller-manager:v1.26.1],SizeBytes:32245960,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3 docker.io/calico/kube-controllers:v3.25.0],SizeBytes:31271800,},ContainerImage{Names:[docker.io/calico/typha@sha256:f7e0557e03f422c8ba5fcf64ef0fac054ee99935b5d101a0a50b5e9b65f6a5c5 docker.io/calico/typha:v3.25.0],SizeBytes:28533187,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3 registry.k8s.io/kube-proxy:v1.26.1],SizeBytes:21536169,},ContainerImage{Names:[quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582 quay.io/tigera/operator:v1.29.0],SizeBytes:21108896,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4 registry.k8s.io/kube-scheduler:v1.26.1],SizeBytes:17486267,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/calico/node-driver-registrar@sha256:f559ee53078266d2126732303f588b9d4266607088e457ea04286f31727676f7 docker.io/calico/node-driver-registrar:v3.25.0],SizeBytes:11133658,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:10076715,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:9117963,},ContainerImage{Names:[docker.io/calico/csi@sha256:61a95f3ee79a7e591aff9eff535be73e62d2c3931d07c2ea8a1305f7bea19b31 docker.io/calico/csi:v3.25.0],SizeBytes:9076936,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:01ddd57d428787b3ac689daa685660defe4bd7810069544bd43a9103a7b0a789 docker.io/calico/pod2daemon-flexvol:v3.25.0],SizeBytes:7076045,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 
28 01:18:25.360: INFO: Logging kubelet events for node capz-conf-cdfcgm-control-plane-t22kx Jan 28 01:18:25.422: INFO: Logging pods the kubelet thinks is on node capz-conf-cdfcgm-control-plane-t22kx Jan 28 01:18:25.526: INFO: calico-typha-79478bc8f-4cnvr started at 2023-01-27 23:18:15 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container calico-typha ready: true, restart count 0 Jan 28 01:18:25.526: INFO: csi-node-driver-rb82t started at 2023-01-27 23:18:46 +0000 UTC (0+2 container statuses recorded) Jan 28 01:18:25.526: INFO: Container calico-csi ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container csi-node-driver-registrar ready: true, restart count 0 Jan 28 01:18:25.526: INFO: calico-apiserver-764b4b8b98-zksqp started at 2023-01-27 23:19:09 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container calico-apiserver ready: true, restart count 0 Jan 28 01:18:25.526: INFO: metrics-server-7d674f87b8-ssshr started at 2023-01-27 23:18:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container metrics-server ready: true, restart count 0 Jan 28 01:18:25.526: INFO: etcd-capz-conf-cdfcgm-control-plane-t22kx started at 2023-01-27 23:17:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container etcd ready: true, restart count 0 Jan 28 01:18:25.526: INFO: kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx started at 2023-01-27 23:17:44 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 28 01:18:25.526: INFO: tigera-operator-65d6bf4d4f-v6zc9 started at 2023-01-27 23:18:05 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container tigera-operator ready: true, restart count 0 Jan 28 01:18:25.526: INFO: calico-node-hssc9 started at 2023-01-27 23:18:15 +0000 UTC (2+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Init container install-cni ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container calico-node ready: true, restart count 0 Jan 28 01:18:25.526: INFO: coredns-57575c5f89-pbph2 started at 2023-01-27 23:18:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container coredns ready: true, restart count 0 Jan 28 01:18:25.526: INFO: coredns-57575c5f89-hndwr started at 2023-01-27 23:18:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container coredns ready: true, restart count 0 Jan 28 01:18:25.526: INFO: csi-azuredisk-node-z9n4w started at 2023-01-27 23:19:38 +0000 UTC (0+3 container statuses recorded) Jan 28 01:18:25.526: INFO: Container azuredisk ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container liveness-probe ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 28 01:18:25.526: INFO: csi-azuredisk-controller-545d478dbf-ckd2x started at 2023-01-27 23:19:38 +0000 UTC (0+6 container statuses recorded) Jan 28 01:18:25.526: INFO: Container azuredisk ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container csi-attacher ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container csi-provisioner ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container csi-resizer ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 28 01:18:25.526: INFO: Container liveness-probe 
ready: true, restart count 0 Jan 28 01:18:25.526: INFO: kube-apiserver-capz-conf-cdfcgm-control-plane-t22kx started at 2023-01-27 23:17:25 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container kube-apiserver ready: true, restart count 0 Jan 28 01:18:25.526: INFO: kube-scheduler-capz-conf-cdfcgm-control-plane-t22kx started at 2023-01-27 23:17:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container kube-scheduler ready: true, restart count 0 Jan 28 01:18:25.526: INFO: kube-proxy-q9z28 started at 2023-01-27 23:18:04 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container kube-proxy ready: true, restart count 0 Jan 28 01:18:25.526: INFO: calico-kube-controllers-594d54f99-9n858 started at 2023-01-27 23:18:45 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 28 01:18:25.526: INFO: calico-apiserver-764b4b8b98-qtnln started at 2023-01-27 23:19:09 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:25.526: INFO: Container calico-apiserver ready: true, restart count 0 Jan 28 01:18:25.834: INFO: Latency metrics for node capz-conf-cdfcgm-control-plane-t22kx Jan 28 01:18:25.834: INFO: Logging node info for node capz-conf-mpgmr Jan 28 01:18:25.896: INFO: Node Info: &Node{ObjectMeta:{capz-conf-mpgmr dd1b355d-7fe1-4651-b0f7-8d3a6e146fcb 26385 0 2023-01-27 23:28:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:westus3 failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-mpgmr kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:westus3 topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-cdfcgm cluster.x-k8s.io/cluster-namespace:capz-conf-cdfcgm cluster.x-k8s.io/machine:capz-conf-cdfcgm-md-win-5ffc7d6c68-nfd8h cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-cdfcgm-md-win-5ffc7d6c68 csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-conf-mpgmr"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.241.1 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:ad:19:bd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-27 23:28:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-01-27 23:28:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-27 23:28:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } 
{manager Update v1 2023-01-27 23:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {calico-node.exe Update v1 2023-01-27 23:30:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2023-01-28 00:43:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet.exe Update v1 2023-01-28 00:59:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-conf-cdfcgm/providers/Microsoft.Compute/virtualMachines/capz-conf-mpgmr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 01:14:29 +0000 UTC,LastTransitionTime:2023-01-27 23:28:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 01:14:29 +0000 UTC,LastTransitionTime:2023-01-27 23:28:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 01:14:29 +0000 UTC,LastTransitionTime:2023-01-27 23:28:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 01:14:29 +0000 UTC,LastTransitionTime:2023-01-27 23:29:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-mpgmr,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-mpgmr,SystemUUID:5BB50764-E7C8-4A87-9C69-47772A798650,BootID:9,KernelVersion:10.0.17763.3887,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:130115533,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:112797444,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:111834447,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:89f16100a57624bfa729b9e50c941b46a4fdceaa8818b96bdad6cab8ff44ca45 k8s.gcr.io/e2e-test-images/resource-consumer:1.10],SizeBytes:105490980,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:2082c9b6488b3a2839141f472740c36484d5cbc91f7c24d67bc77ea311d4602b docker.io/sigwindowstools/calico-install:v3.24.5-hostprocess],SizeBytes:49820336,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ba0ac4633a832430a00374ef6cf1c701797017b8d09ccc3fb12db253e250887a docker.io/sigwindowstools/calico-node:v3.24.5-hostprocess],SizeBytes:28623190,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 01:18:25.897: INFO: Logging kubelet events for node capz-conf-mpgmr Jan 28 01:18:25.961: INFO: Logging pods the kubelet thinks is on node capz-conf-mpgmr Jan 28 01:18:26.044: INFO: 
containerd-logger-7b895 started at 2023-01-27 23:28:48 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.044: INFO: Container containerd-logger ready: true, restart count 0 Jan 28 01:18:26.044: INFO: csi-proxy-jp5b8 started at 2023-01-28 00:58:57 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.044: INFO: Container csi-proxy ready: true, restart count 0 Jan 28 01:18:26.044: INFO: calico-node-windows-pkjkv started at 2023-01-27 23:28:48 +0000 UTC (1+2 container statuses recorded) Jan 28 01:18:26.044: INFO: Init container install-cni ready: true, restart count 0 Jan 28 01:18:26.044: INFO: Container calico-node-felix ready: true, restart count 1 Jan 28 01:18:26.044: INFO: Container calico-node-startup ready: true, restart count 0 Jan 28 01:18:26.044: INFO: csi-azuredisk-node-win-jxz5f started at 2023-01-28 00:58:57 +0000 UTC (1+3 container statuses recorded) Jan 28 01:18:26.044: INFO: Init container init ready: true, restart count 0 Jan 28 01:18:26.044: INFO: Container azuredisk ready: true, restart count 0 Jan 28 01:18:26.044: INFO: Container liveness-probe ready: true, restart count 0 Jan 28 01:18:26.044: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 28 01:18:26.044: INFO: kube-proxy-windows-bd49q started at 2023-01-27 23:28:48 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.044: INFO: Container kube-proxy ready: true, restart count 0 Jan 28 01:18:26.301: INFO: Latency metrics for node capz-conf-mpgmr Jan 28 01:18:26.301: INFO: Logging node info for node capz-conf-x4p77 Jan 28 01:18:26.363: INFO: Node Info: &Node{ObjectMeta:{capz-conf-x4p77 d0abe981-3d75-4029-a215-cb3e43a31a4a 27118 0 2023-01-27 23:28:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:westus3 failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-x4p77 kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:westus3 topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-cdfcgm cluster.x-k8s.io/cluster-namespace:capz-conf-cdfcgm cluster.x-k8s.io/machine:capz-conf-cdfcgm-md-win-5ffc7d6c68-hk7x9 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-cdfcgm-md-win-5ffc7d6c68 csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-conf-x4p77"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.129 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:c3:0c:f8 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-27 23:28:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-01-27 23:28:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {manager Update v1 2023-01-27 23:29:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {calico-node.exe Update v1 2023-01-27 23:29:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2023-01-28 00:43:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet.exe Update v1 2023-01-28 00:57:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-28 00:57:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-conf-cdfcgm/providers/Microsoft.Compute/virtualMachines/capz-conf-x4p77,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2023-01-28 00:57:26 +0000 UTC,},Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2023-01-28 00:57:27 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:51 +0000 UTC,LastTransitionTime:2023-01-28 00:57:17 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:51 +0000 UTC,LastTransitionTime:2023-01-28 00:57:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 01:17:51 +0000 UTC,LastTransitionTime:2023-01-28 00:57:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2023-01-28 01:17:51 +0000 UTC,LastTransitionTime:2023-01-28 00:57:27 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-x4p77,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-x4p77,SystemUUID:2CA02D1E-9691-4305-8E38-04210F142531,BootID:10,KernelVersion:10.0.17763.3887,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:130115533,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:112797444,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:111834447,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:89f16100a57624bfa729b9e50c941b46a4fdceaa8818b96bdad6cab8ff44ca45 k8s.gcr.io/e2e-test-images/resource-consumer:1.10],SizeBytes:105490980,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:2082c9b6488b3a2839141f472740c36484d5cbc91f7c24d67bc77ea311d4602b docker.io/sigwindowstools/calico-install:v3.24.5-hostprocess],SizeBytes:49820336,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ba0ac4633a832430a00374ef6cf1c701797017b8d09ccc3fb12db253e250887a docker.io/sigwindowstools/calico-node:v3.24.5-hostprocess],SizeBytes:28623190,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 01:18:26.363: INFO: Logging kubelet events for node capz-conf-x4p77 Jan 28 01:18:26.425: INFO: Logging pods the kubelet thinks is on node capz-conf-x4p77 Jan 28 01:18:26.512: INFO: kube-proxy-windows-98j6m started at 2023-01-27 23:28:51 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.512: INFO: Container kube-proxy ready: true, restart count 2 Jan 28 01:18:26.512: INFO: containerd-logger-bqsnj started at 2023-01-27 23:28:51 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.512: INFO: Container containerd-logger ready: true, restart count 1 Jan 28 01:18:26.512: INFO: calico-node-windows-n6ccv started at 2023-01-27 23:28:51 +0000 UTC (1+2 container statuses recorded) Jan 28 01:18:26.512: INFO: Init container install-cni ready: true, restart count 1 Jan 28 01:18:26.512: INFO: Container calico-node-felix ready: true, restart count 2 Jan 28 01:18:26.512: INFO: Container calico-node-startup ready: true, restart count 1 Jan 28 01:18:26.512: INFO: csi-proxy-rhfls started at 2023-01-27 23:29:22 +0000 UTC (0+1 container statuses recorded) Jan 28 01:18:26.512: INFO: Container csi-proxy ready: true, restart count 1 Jan 28 01:18:26.512: INFO: csi-azuredisk-node-win-qcbvj started at 2023-01-27 23:29:22 +0000 UTC (1+3 container statuses recorded) Jan 28 01:18:26.512: INFO: Init container init ready: true, restart count 1 Jan 28 01:18:26.512: INFO: Container azuredisk ready: true, restart count 1 Jan 28 01:18:26.512: INFO: Container liveness-probe ready: true, restart count 1 Jan 28 01:18:26.512: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 28 01:18:26.774: INFO: Latency metrics for node capz-conf-x4p77 Jan 28 01:18:26.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 28 01:18:26.840: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:18:28.907: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 01:18:30.908: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. (this identical message repeats at ~2s intervals through 01:21:26.908) Failure Jan 28 01:21:26.975: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:21:26.975: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f860, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f STEP: Destroying namespace "daemonsets-8122" for this suite. • Failure [182.937 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] [It] test/e2e/framework/framework.go:652 Jan 28 01:18:24.792: Conformance test suite needs a cluster with at least 2 nodes. Expected <int>: 1 to be > <int>: 1 test/e2e/apps/daemon_set.go:434 Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func3.9() test/e2e/apps/daemon_set.go:434 +0x1dd k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f860, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f
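Note that this Daemon set spec never reached its rollback logic; it failed a precondition, because with capz-conf-x4p77 NotReady only one schedulable node remained. The "Expected <int>: 1 to be > <int>: 1" text is Gomega's rendering of a numeric comparison; a sketch of the shape of that precondition (a hypothetical stand-in, not the upstream source at test/e2e/apps/daemon_set.go:434; assumes a Ginkgo/Gomega fail handler is registered):

package e2e

import . "github.com/onsi/gomega"

// requireMultiNode fails exactly the way the log above does when only the
// control-plane node is schedulable: 1 is not > 1.
func requireMultiNode(schedulableNodes int) {
	Expect(schedulableNodes).To(BeNumerically(">", 1),
		"Conformance test suite needs a cluster with at least 2 nodes.")
}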
------------------------------ {"msg":"FAILED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":61,"completed":30,"skipped":3634,"failed":5,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSS... (run of Ginkgo "S" skipped-spec progress markers elided) ------------------------------ [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:43 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 28 01:21:27.045: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:43 STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 5 replicas STEP: creating deployment test-deployment in namespace horizontal-pod-autoscaling-2226
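Each "ConsumeCPU URL" / "ConsumeMem URL" / "ConsumeCustomMetric URL" entry in the log that follows is a request sent through the API server's services-proxy subresource to the test-deployment-ctrl controller, which fans the load out to the resource-consumer pods. A sketch of one such request (assuming the cs clientset from the earlier snippet; the helper name is hypothetical, but the proxy path and query parameters are the ones the URLs below show):

package e2e

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// consumeCPU asks the resource consumer to burn the given millicores for 30s,
// split into 100-millicore requests, via the services-proxy path shown in the
// ConsumeCPU URL log lines.
func consumeCPU(cs kubernetes.Interface, millicores string) error {
	return cs.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-2226").
		Resource("services").
		Name("test-deployment-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", millicores).
		Param("requestSizeMillicores", "100").
		Do(context.TODO()).
		Error()
}

Called with "325" it produces the sustained load that holds the deployment at 5 replicas; with "10" it produces the idle phase that lets the HPA scale down, matching the two phases of the log.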
I0128 01:21:27.614192 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-2226, replica count: 5 I0128 01:21:37.716269 14 runners.go:193] test-deployment Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-2226 I0128 01:21:37.855229 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-2226, replica count: 1 I0128 01:21:47.956710 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 28 01:21:52.957: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Jan 28 01:21:53.019: INFO: RC test-deployment: consume 325 millicores in total Jan 28 01:21:53.019: INFO: RC test-deployment: sending request to consume 0 millicores Jan 28 01:21:53.019: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 } Jan 28 01:21:53.085: INFO: RC test-deployment: setting consumption to 325 millicores in total Jan 28 01:21:53.085: INFO: RC test-deployment: consume 0 MB in total Jan 28 01:21:53.085: INFO: RC test-deployment: setting consumption to 0 MB in total Jan 28 01:21:53.085: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:21:53.085: INFO: RC test-deployment: consume custom metric 0 in total Jan 28 01:21:53.085: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:21:53.085: INFO: RC test-deployment: setting bump of metric QPS to 0 in total Jan 28 01:21:53.085: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:21:53.085: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:21:53.211: INFO: waiting for 3 replicas (current: 5) Jan 28 01:22:13.273: INFO: waiting for 3 replicas (current: 5) Jan 28 01:22:23.086: INFO: RC test-deployment: sending request to consume 325 millicores Jan 28 01:22:23.086: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 28 01:22:23.150: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:22:23.151: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:22:23.151: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:22:23.152: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:22:33.276: INFO: waiting for 3 replicas (current: 5) (the same ConsumeCPU 325-millicore / ConsumeMem 0 MB / BumpMetric delta-0 request cycle repeats roughly every 30s, and "waiting for 3 replicas (current: 5)" roughly every 20s, from 01:22:53 through 01:26:53) Jan
28 01:26:53.819: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:27:13.274: INFO: waiting for 3 replicas (current: 3) Jan 28 01:27:13.274: INFO: RC test-deployment: consume 10 millicores in total Jan 28 01:27:13.274: INFO: RC test-deployment: setting consumption to 10 millicores in total Jan 28 01:27:13.335: INFO: waiting for 1 replicas (current: 3) Jan 28 01:27:23.847: INFO: RC test-deployment: sending request to consume 10 millicores Jan 28 01:27:23.847: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 28 01:27:23.851: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:27:23.851: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:27:23.885: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:27:23.886: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:27:33.397: INFO: waiting for 1 replicas (current: 3) Jan 28 01:27:53.397: INFO: waiting for 1 replicas (current: 3) Jan 28 01:27:53.914: INFO: RC test-deployment: sending request to consume 10 millicores Jan 28 01:27:53.914: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 28 01:27:53.914: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:27:53.914: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:27:53.949: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:27:53.949: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:28:13.402: INFO: waiting for 1 replicas (current: 3) Jan 28 01:28:23.979: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:28:23.979: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:28:23.982: INFO: RC test-deployment: sending request to consume 10 millicores Jan 28 01:28:23.982: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } (the same ConsumeCPU 10-millicore / ConsumeMem 0 MB / BumpMetric delta-0 request cycle repeats roughly every 30s, and "waiting for 1 replicas (current: 3)" roughly every 20s, from 01:28:24 through 01:31:24) Jan 28 01:31:24.391: INFO: RC test-deployment: sending request to consume 10 millicores Jan 28 01:31:24.391: INFO: ConsumeCPU URL: {https
capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 28 01:31:24.394: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:31:24.395: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:31:33.397: INFO: waiting for 1 replicas (current: 3) Jan 28 01:31:53.397: INFO: waiting for 1 replicas (current: 3) Jan 28 01:31:54.430: INFO: RC test-deployment: sending request to consume 0 MB Jan 28 01:31:54.430: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 28 01:31:54.460: INFO: RC test-deployment: sending request to consume 10 millicores Jan 28 01:31:54.460: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 28 01:31:54.460: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 28 01:31:54.460: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2226/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 28 01:32:13.397: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC test-deployment Jan 28 01:32:13.461: INFO: RC test-deployment: stopping metric consumer Jan 28 01:32:13.461: INFO: RC test-deployment: stopping CPU consumer Jan 28 01:32:13.461: INFO: RC test-deployment: stopping mem consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-2226, will wait for the garbage collector to delete the pods Jan 28 01:32:23.688: INFO: Deleting Deployment.apps test-deployment took: 63.880063ms Jan 28 01:32:23.789: INFO: Terminating Deployment.apps test-deployment pods took: 101.194346ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-2226, will wait for the garbage collector to delete the pods Jan 28 01:32:25.891: INFO: Deleting ReplicationController test-deployment-ctrl took: 63.577561ms Jan 28 01:32:25.992: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.572319ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 28 01:32:27.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 28 01:32:27.732: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
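The HPA spec itself passed (5 replicas, then 3, then 1); what follows is its [AfterEach] hook repeating the same ~2s node-readiness poll against the still-NotReady Windows node, with a 3m budget. A sketch of the shape of such a poll (assumes the cs clientset from the first snippet; this is an illustration, not the framework's implementation):

package e2e

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady re-checks the node every 2s until its Ready condition is
// True or the 3m budget is spent, matching the cadence of the log lines below.
func waitForNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}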
Jan 28 01:32:29.799: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
... (same message repeated every ~2s while the 3m0s readiness wait ran) ...
Jan 28 01:35:27.866: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:35:27.866: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "horizontal-pod-autoscaling-2226" for this suite.
• Failure in Spec Teardown (AfterEach) [840.886 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment [AfterEach]
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
    Should scale from 5 pods to 3 pods and from 3 to 1
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:43

    Jan 28 01:35:27.867: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
    vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":30,"skipped":3780,"failed":6,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1"]}
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:35:27.933: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 01:35:28.429: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jan 28 01:35:40.999: INFO: The status of Pod kube-controller-manager-capz-conf-cdfcgm-control-plane-t22kx is Running (Ready = true)
Jan 28 01:35:41.522: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 28 01:35:41.522: INFO: Deleting pod "simpletest-rc-to-be-deleted-2bspn" in namespace "gc-2256"
Jan 28 01:35:41.591: INFO: Deleting pod "simpletest-rc-to-be-deleted-2drng" in namespace "gc-2256"
Jan 28 01:35:41.661: INFO: Deleting pod "simpletest-rc-to-be-deleted-2lmft" in namespace "gc-2256"
Jan 28 01:35:41.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-2vxxw" in namespace "gc-2256"
Jan 28 01:35:41.796: INFO: Deleting pod "simpletest-rc-to-be-deleted-4425r" in namespace "gc-2256"
Jan 28 01:35:41.867: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cf68" in namespace "gc-2256"
Jan 28 01:35:41.935: INFO: Deleting pod "simpletest-rc-to-be-deleted-5s5wk" in namespace "gc-2256"
Jan 28 01:35:42.002: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xq9z" in namespace "gc-2256"
Jan 28 01:35:42.072: INFO: Deleting pod "simpletest-rc-to-be-deleted-627p7" in namespace "gc-2256"
Jan 28 01:35:42.141: INFO: Deleting pod "simpletest-rc-to-be-deleted-65wgh" in namespace "gc-2256"
Jan 28 01:35:42.237: INFO: Deleting pod "simpletest-rc-to-be-deleted-72n5m" in namespace "gc-2256"
Jan 28 01:35:42.305: INFO: Deleting pod "simpletest-rc-to-be-deleted-78g4t" in namespace "gc-2256"
Jan 28 01:35:42.374: INFO: Deleting pod "simpletest-rc-to-be-deleted-7frh2" in namespace "gc-2256"
Jan 28 01:35:42.443: INFO: Deleting pod "simpletest-rc-to-be-deleted-7pt9z" in namespace "gc-2256"
Jan 28 01:35:42.522: INFO: Deleting pod "simpletest-rc-to-be-deleted-7vs6r" in namespace "gc-2256"
Jan 28 01:35:42.591: INFO: Deleting pod "simpletest-rc-to-be-deleted-95wmm" in namespace "gc-2256"
Jan 28 01:35:42.659: INFO: Deleting pod "simpletest-rc-to-be-deleted-9fxs9" in namespace "gc-2256"
Jan 28 01:35:42.737: INFO: Deleting pod "simpletest-rc-to-be-deleted-b97zq" in namespace "gc-2256"
Jan 28 01:35:42.814: INFO: Deleting pod "simpletest-rc-to-be-deleted-bmprz" in namespace "gc-2256"
Jan 28 01:35:42.883: INFO: Deleting pod "simpletest-rc-to-be-deleted-c9vtz" in namespace "gc-2256"
Jan 28 01:35:42.950: INFO: Deleting pod "simpletest-rc-to-be-deleted-ccpdm" in namespace "gc-2256"
Jan 28 01:35:43.017: INFO: Deleting pod "simpletest-rc-to-be-deleted-dfhg6" in namespace "gc-2256"
Jan 28 01:35:43.085: INFO: Deleting pod "simpletest-rc-to-be-deleted-dnlq9" in namespace "gc-2256"
Jan 28 01:35:43.156: INFO: Deleting pod "simpletest-rc-to-be-deleted-fmhms" in namespace "gc-2256"
Jan 28 01:35:43.223: INFO: Deleting pod "simpletest-rc-to-be-deleted-fv558" in namespace "gc-2256"
Jan 28 01:35:43.290: INFO: Deleting pod "simpletest-rc-to-be-deleted-gn9z6" in namespace "gc-2256"
Jan 28 01:35:43.358: INFO: Deleting pod "simpletest-rc-to-be-deleted-gz485" in namespace "gc-2256"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 28 01:35:43.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:35:43.493: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
... (same message repeated every ~2s while the 3m0s readiness wait ran) ...
Jan 28 01:38:43.625: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:38:43.626: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "gc-2256" for this suite.
• Failure in Spec Teardown (AfterEach) [195.758 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:38:43.626: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  	test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
  	test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc00021f860, 0x741f9a8)
  	/usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":61,"completed":30,"skipped":3834,"failed":7,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]"]}
------------------------------
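The recurring message in the teardown above is the e2e framework re-checking every node's Ready condition after each spec. A minimal client-go sketch of that same check against the affected node (hypothetical helper, not code from the suite; only the node name and kubeconfig path are taken from the log):

// inspect_node.go - hedged sketch: print the Ready condition and the
// node-controller taints that the framework keeps logging above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite reports using (">>> kubeConfig: /tmp/kubeconfig").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-conf-x4p77", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The Ready condition is what the log reports as "false".
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Condition Ready of node %s is %s\n", node.Name, c.Status)
		}
	}
	// The NoSchedule/NoExecute not-ready taints added by the node controller.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s %s added %v\n", t.Key, t.Effect, t.TimeAdded)
	}
}

The interactive equivalent is `kubectl describe node capz-conf-x4p77`, which lists the same conditions and taints.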
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:38:43.691: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Jan 28 01:38:44.312: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 28 01:38:44.377: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[... identical not-ready poll messages, repeated roughly every 2s from 01:38:44 to 01:39:44, omitted ...]
Jan 28 01:39:44.785: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
STEP: Create pods that use 4/5 of node resources.
Jan 28 01:39:44.985: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 28 01:39:45.051: INFO: Created pod: pod0-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:40:05.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:40:05.676: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[... identical not-ready poll messages, repeated roughly every 2s from 01:40:07 to 01:43:05, omitted ...]
Jan 28 01:43:05.811: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "sched-preemption-1397" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
• Failure in Spec Teardown (AfterEach) [262.445 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:43:05.811: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  	test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
  	test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc00021f860, 0x741f9a8)
  	/usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":61,"completed":30,"skipped":3857,"failed":8,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]"]}
------------------------------
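For context on what the failing spec exercises: scheduler preemption is driven by PriorityClass objects, and the pods the log shows being created differ essentially in their priorityClassName. A rough sketch of that mechanics, assuming illustrative names, the pause image, and a 500m CPU request (the real spec lives in test/e2e/scheduling/preemption.go and sizes requests from actual node capacity):

// preemption_fixtures.go - hedged sketch of PriorityClass-based preemption,
// not the suite's code. A higher-Value class preempts lower ones when the
// node is otherwise full.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createPreemptionFixtures(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// A low-priority class; the spec uses low/medium/high variants.
	low := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"}, // assumed name
		Value:      1,                                                        // higher Value wins
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, low, metav1.CreateOptions{}); err != nil {
		return err
	}

	// A pod that pins enough CPU that a later, higher-priority pod with the
	// same requirements can only schedule by evicting it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod0-0-sched-preemption-low-priority", Namespace: ns},
		Spec: corev1.PodSpec{
			PriorityClassName: "sched-preemption-low-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8", // illustrative image
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // illustrative size
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

Note that the spec body itself got as far as "Run a high priority pod"; the recorded failure comes from the AfterEach node-readiness check, not from preemption misbehaving.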
[sig-node] Variable Expansion
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:43:06.137: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 28 01:45:07.448: INFO: Successfully updated pod "var-expansion-ccba8b63-57ee-4e0a-b239-e948a816f345"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 28 01:45:19.572: INFO: Deleting pod "var-expansion-ccba8b63-57ee-4e0a-b239-e948a816f345" in namespace "var-expansion-3107"
Jan 28 01:45:19.637: INFO: Wait up to 5m0s for pod "var-expansion-ccba8b63-57ee-4e0a-b239-e948a816f345" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 28 01:45:25.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:45:25.827: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[... identical not-ready poll messages for capz-conf-x4p77, repeated roughly every 2s from 01:45:27 onward, omitted ...]
Failure Jan 28 01:47:15.896: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:17.896: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:19.897: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:21.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:23.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:25.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:27.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:29.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:31.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:33.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:35.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:37.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 01:47:39.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:41.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:43.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:45.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:47.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:49.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:51.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:53.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:55.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:57.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:47:59.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:01.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 01:48:03.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:05.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:07.896: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:09.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:11.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:13.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:15.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:17.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:19.895: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:21.894: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:23.893: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 01:48:25.896: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. 
Failure Jan 28 01:48:25.962: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
Failure Jan 28 01:48:25.963: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "var-expansion-3107" for this suite.
• Failure in Spec Teardown (AfterEach) [319.892 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652
  Jan 28 01:48:25.963: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":61,"completed":30,"skipped":3888,"failed":9,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]"]}
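[editor's note] The gate that keeps failing here is the framework's "all nodes should be ready after test" check. Below is a minimal standalone sketch of that check, illustrative only — it is not the e2e framework's own helper. It assumes client-go is available, KUBECONFIG points at the workload cluster, and uses the node name from this run.

```go
package main

import (
	"context"
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the workload cluster (hypothetical setup).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-conf-x4p77", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The suite's wait keeps failing because this condition never flips
	// back to True after the node stops reporting Ready.
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}

	// node.kubernetes.io/not-ready is added by the node lifecycle
	// controller; the NoExecute variant evicts pods without a matching
	// toleration, which is why subsequent [Serial] specs also fail.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s added=%v\n", t.Key, t.Effect, t.TimeAdded)
	}
}
```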
[log condensed: a long run of "S" (spec skipped) progress markers and their ANSI color codes removed]
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:48:26.037: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Jan 28 01:48:26.659: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 28 01:48:26.725: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[log condensed: the message above repeated roughly twice per 2-second poll from Jan 28 01:48:26.795 through Jan 28 01:49:27.135]
Jan 28 01:49:27.135: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/framework/framework.go:652
STEP: Create pods that use 4/5 of node resources.
Jan 28 01:49:27.329: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 28 01:49:27.396: INFO: Created pod: pod0-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:49:42.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:49:42.158: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[log condensed: the NotReady/tainted message for node capz-conf-x4p77 repeated every ~2 seconds from Jan 28 01:49:44.226 through Jan 28 01:52:38.227]
Failure Jan 28 01:52:42.293: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
Failure Jan 28 01:52:42.293: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "sched-preemption-1593" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
• Failure in Spec Teardown (AfterEach) [256.580 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:52:42.293: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace (identical to the trace above; elided)
------------------------------
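The failure above is the framework's post-spec gate: after each [Serial] spec it lists all nodes and requires the Ready condition to be True, and capz-conf-x4p77 never sheds the NodeController's not-ready taints, so every teardown times out and fails. A minimal client-go sketch of an equivalent readiness-and-taint inspection (an illustration only, not the e2e framework's actual code; the program layout is an assumption, though /tmp/kubeconfig is the path this job logs):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the Node's Ready condition is True -- the same
// condition the log prints as "Condition Ready of node ... is false".
func isNodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: the same kubeconfig the job uses (/tmp/kubeconfig per the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		if !isNodeReady(&n) {
			// Print the node and the taints the NodeController attached,
			// mirroring the repeated log line above.
			fmt.Printf("node %s not Ready; taints: %v\n", n.Name, n.Spec.Taints)
		}
	}
}

Run against this workload cluster, a check like this would report capz-conf-x4p77 with the two node.kubernetes.io/not-ready taints shown in the log.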
{"msg":"FAILED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":61,"completed":30,"skipped":4183,"failed":10,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]}
SSS... (Ginkgo skipped-spec markers elided)
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:52:42.618: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
Jan 28 01:52:43.178: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[It] should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 01:52:43.371: INFO: DaemonSet pods can't tolerate node capz-conf-cdfcgm-control-plane-t22kx with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 28 01:52:43.371: INFO: DaemonSet pods can't tolerate node capz-conf-x4p77 with taints [{Key:node.kubernetes.io/not-ready Value: Effect:NoSchedule TimeAdded:2023-01-28 00:57:26 +0000 UTC} {Key:node.kubernetes.io/not-ready Value: Effect:NoExecute TimeAdded:2023-01-28 00:57:27 +0000 UTC}], skip checking this node
Jan 28 01:52:43.432: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 01:52:43.433: INFO: Node capz-conf-mpgmr is running 0 daemon pod, expected 1
(the toleration-skip and 0-available poll lines repeated every ~1s through 01:52:47; repeats elided)
Jan 28 01:52:48.562: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 28 01:52:48.562: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Stop a daemon pod, check that the daemon pod is revived.
(the same toleration-skip and 0-available poll lines repeated every ~1s from 01:52:48.820 through 01:52:58; repeats elided)
Jan 28 01:52:59.011: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 28 01:52:59.011: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions
daemon-set in namespace daemonsets-841, will wait for the garbage collector to delete the pods
Jan 28 01:52:59.299: INFO: Deleting DaemonSet.extensions daemon-set took: 63.828855ms
Jan 28 01:52:59.400: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.05026ms
Jan 28 01:53:04.562: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 28 01:53:04.562: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 28 01:53:04.623: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"34917"},"items":null}
Jan 28 01:53:04.684: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"34917"},"items":null}
Jan 28 01:53:04.751: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 28 01:53:04.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
(the same not-ready/taint message was logged every ~2s from 01:53:04.879 through 01:56:04.948, until the 3m0s wait expired; repeats elided)
Failure Jan 28 01:56:05.014: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}].
Failure Jan 28 01:56:05.014: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "daemonsets-841" for this suite.
• Failure in Spec Teardown (AfterEach) [202.462 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:56:05.014: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace (identical to the trace above; elided)
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":61,"completed":30,"skipped":4277,"failed":11,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]"]}
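The DaemonSet spec above passes its own assertions (the daemon pod launches and is revived on capz-conf-mpgmr) and fails only in teardown. The reason capz-conf-x4p77 is excluded from the expected-pods count is visible in the "can't tolerate node ... skip checking this node" lines: the test's pods carry no toleration for the NodeController's not-ready taints. A hedged sketch of that matching, using the ToleratesTaint helper from k8s.io/api/core/v1 (the toleration itself is hypothetical; the conformance DaemonSet deliberately does not carry one):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two taints the NodeController placed on capz-conf-x4p77.
	taints := []corev1.Taint{
		{Key: "node.kubernetes.io/not-ready", Effect: corev1.TaintEffectNoSchedule},
		{Key: "node.kubernetes.io/not-ready", Effect: corev1.TaintEffectNoExecute},
	}

	// Hypothetical toleration a pod would need to remain schedulable there;
	// an empty Effect with Operator=Exists matches any effect for the key.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
	}

	for _, t := range taints {
		t := t // local copy so the pointer below is stable
		fmt.Printf("tolerates %s:%s = %v\n", t.Key, t.Effect, tol.ToleratesTaint(&t))
	}
}

Both checks print true for the hypothetical toleration; the test pods lack it, which is exactly why the node is skipped rather than counted as missing a daemon pod.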
SSS... (Ginkgo skipped-spec markers elided)
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:56:05.083: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:652
Jan 28 01:56:09.706: INFO: Deleting pod "var-expansion-78a6ac4e-9f92-461e-8836-6ccc9e7d9cf7" in namespace "var-expansion-2320"
Jan 28 01:56:09.771: INFO: Wait up to 5m0s for pod "var-expansion-78a6ac4e-9f92-461e-8836-6ccc9e7d9cf7" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 28 01:56:13.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 01:56:13.967: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[... the same "Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure" message repeats every ~2s from 01:56:16.034 through 01:59:12.035 ...]
Jan 28 01:59:14.034: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:59:14.100: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
Jan 28 01:59:14.101: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "var-expansion-2320" for this suite.
• Failure in Spec Teardown (AfterEach) [189.083 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] [AfterEach]
  test/e2e/framework/framework.go:652

  Jan 28 01:59:14.101: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
  vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

  Full Stack Trace
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  	test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
  	test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc00021f860, 0x741f9a8)
  	/usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":61,"completed":30,"skipped":4372,"failed":12,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]"]}
[47 specs skipped]
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
[Serial] [Slow] ReplicaSet
  Should scale from 1 pod to 3 pods and from 3 to 5
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 01:59:14.168: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 1 pod to 3 pods and from 3 to 5
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-7782
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-7782
I0128 01:59:14.742836 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-7782, replica count: 1
I0128 01:59:24.845204 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-7782
I0128 01:59:24.983160 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-7782, replica count: 1
I0128 01:59:35.084354 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 28 01:59:40.084: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1
Jan 28 01:59:40.146: INFO: RC rs: consume 250 millicores in total
Jan 28 01:59:40.146: INFO: RC rs: sending request to consume 0 millicores
Jan 28 01:59:40.146: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 }
Jan 28 01:59:40.211: INFO: RC rs: setting consumption to 250 millicores in total
Jan 28 01:59:40.211: INFO: RC rs: consume 0 MB in total
Jan 28 01:59:40.211: INFO: RC rs: sending request to consume 0 MB
Jan 28 01:59:40.211: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 28 01:59:40.275: INFO: RC rs: setting consumption to 0 MB in total
Jan 28 01:59:40.275: INFO: RC rs: consume custom metric 0 in total
Jan 28 01:59:40.275: INFO: RC rs: setting bump of metric QPS to 0 in total
Jan 28 01:59:40.275: INFO: RC rs: sending request to consume 0 of custom metric QPS
Jan 28 01:59:40.275: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 28 01:59:40.459: INFO: waiting for 3 replicas (current: 1)
Jan 28 02:00:00.523: INFO: waiting for 3 replicas (current: 1)
Jan 28 02:00:10.211: INFO: RC rs: sending request to consume 250 millicores
Jan 28 02:00:10.211: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Jan 28 02:00:10.275: INFO: RC rs: sending request to consume 0 MB
Jan 28 02:00:10.276: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 28 02:00:10.339: INFO: RC rs: sending request to consume 0 of custom metric QPS
Jan 28 02:00:10.339: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 28 02:00:20.523: INFO: waiting for 3 replicas (current: 1)
Jan 28 02:00:40.522: INFO: waiting for 3 replicas (current: 3)
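The ConsumeCPU/ConsumeMem/BumpMetric URLs above are requests the test sends to its resource-consumer service through the API server's service proxy, which is how the HPA is driven to scale. The snippet below is a hedged sketch of one such request with client-go; the namespace, service name, and query parameters are copied from the log, while the program itself is an assumed illustration rather than the e2e suite's actual helper.

// consume_cpu.go: hedged sketch of a resource-consumer proxy request (assumptions noted above).
package main

import (
	"context"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// POST .../namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeCPU
	//      ?durationSec=30&millicores=250&requestSizeMillicores=100
	// i.e. the same shape as the "ConsumeCPU URL" entries logged above.
	err = client.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-7782").
		Resource("services").
		Name("rs-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", "250").
		Param("requestSizeMillicores", "100").
		Do(context.TODO()).
		Error()
	if err != nil {
		panic(err)
	}
}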
Jan 28 02:00:40.522: INFO: RC rs: consume 700 millicores in total
Jan 28 02:00:40.522: INFO: RC rs: setting consumption to 700 millicores in total
Jan 28 02:00:40.583: INFO: waiting for 5 replicas (current: 3)
Jan 28 02:00:43.291: INFO: RC rs: sending request to consume 700 millicores
Jan 28 02:00:43.291: INFO: RC rs: sending request to consume 0 of custom metric QPS
Jan 28 02:00:43.291: INFO: RC rs: sending request to consume 0 MB
Jan 28 02:00:43.291: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 }
Jan 28 02:00:43.291: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 28 02:00:43.291: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 28 02:01:00.646: INFO: waiting for 5 replicas (current: 4)
Jan 28 02:01:13.355: INFO: RC rs: sending request to consume 0 of custom metric QPS
Jan 28 02:01:13.355: INFO: RC rs: sending request to consume 0 MB
Jan 28 02:01:13.356: INFO: ConsumeMem URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 28 02:01:13.355: INFO: ConsumeCustomMetric URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 28 02:01:16.368: INFO: RC rs: sending request to consume 700 millicores
Jan 28 02:01:16.368: INFO: ConsumeCPU URL: {https capz-conf-cdfcgm-9e300c83.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7782/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 }
Jan 28 02:01:20.646: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC rs
Jan 28 02:01:20.712: INFO: RC rs: stopping metric consumer
Jan 28 02:01:20.712: INFO: RC rs: stopping mem consumer
Jan 28 02:01:20.712: INFO: RC rs: stopping CPU consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-7782, will wait for the garbage collector to delete the pods
Jan 28 02:01:31.093: INFO: Deleting ReplicaSet.apps rs took: 65.319922ms
Jan 28 02:01:31.193: INFO: Terminating ReplicaSet.apps rs pods took: 100.410924ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-7782, will wait for the garbage collector to delete the pods
Jan 28 02:01:35.602: INFO: Deleting ReplicationController rs-ctrl took: 66.261652ms
Jan 28 02:01:35.703: INFO: Terminating ReplicationController rs-ctrl pods took: 101.164505ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:188
Jan 28 02:01:37.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 02:01:37.750: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure
[... the same "Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure" message repeats every ~2s from 02:01:39.818 through 02:04:35.818 ...]
Failure Jan 28 02:04:27.819: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:29.817: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:31.817: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:33.818: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:35.818: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:37.819: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:37.885: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure Jan 28 02:04:37.885: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00021f860, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-7782" for this suite. �[91m�[1m• Failure in Spec Teardown (AfterEach) [323.784 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[90mtest/e2e/autoscaling/framework.go:23�[0m �[91m�[1m[Serial] [Slow] ReplicaSet [AfterEach]�[0m �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:48�[0m Should scale from 1 pod to 3 pods and from 3 to 5 �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:50�[0m �[91mJan 28 02:04:37.886: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"�[0m vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 �[91mFull Stack Trace�[0m k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) 
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":30,"skipped":4419,"failed":13,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5"]}
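The repeated INFO lines above come from the suite's after-each node-readiness wait: every spec's teardown requires all nodes Ready, and capz-conf-x4p77 never recovers from the NodeController's not-ready taints applied at 00:57:26/27, so every subsequent spec fails in teardown. A minimal client-go sketch of an equivalent readiness-and-taint check (illustrative only; nodeReady and ListNotReadyNodes are hypothetical helpers, not the framework's actual code):

package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady mirrors the condition logged above: a node counts as ready only
// if its Ready condition is True and the NodeController has not applied a
// node.kubernetes.io/not-ready taint.
func nodeReady(node *corev1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/not-ready" {
			return false
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// ListNotReadyNodes prints every node that would trip the suite's
// "All nodes should be ready after test" assertion.
func ListNotReadyNodes(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		if !nodeReady(&n) {
			fmt.Printf("%s not ready, taints: %v\n", n.Name, n.Spec.Taints)
		}
	}
	return nil
}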
S (run of skipped-spec "S" markers elided)
------------------------------
[sig-windows] [Feature:Windows] Density [Serial] [Slow]
  create a batch of pods
    latency/resource should be within limit when create 10 pods with 0s interval
    test/e2e/windows/density.go:68
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 02:04:37.959: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename density-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] latency/resource should be within limit when create 10 pods with 0s interval
  test/e2e/windows/density.go:68
STEP: Creating a batch of pods
STEP: Waiting for all Pods to be observed by the watch...
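For orientation, the "Creating a batch of pods" step launches all 10 pods back-to-back with no delay between creations, then relies on a watch to observe them. A rough client-go sketch of such a loop (the pod template, image, node selector, and helper name are assumptions for illustration, not taken from test/e2e/windows/density.go):

package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/uuid"
	"k8s.io/client-go/kubernetes"
)

// createPodBatch creates n pods with a 0s interval, the pattern behind the
// test-<uuid> pod names that appear in the log below.
func createPodBatch(cs kubernetes.Interface, ns string, n int) error {
	for i := 0; i < n; i++ {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("test-%s", uuid.NewUUID())},
			Spec: corev1.PodSpec{
				NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
				Containers: []corev1.Container{{
					Name:  "pause",
					Image: "registry.k8s.io/pause:3.8", // image choice is an assumption
				}},
			},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}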
Jan 28 02:04:58: INFO: Waiting for pods test-0f346d90-61ff-4872-957c-d688a3dc5d3b, test-347d7bb4-9f4d-4fbe-b9ec-f0e35c6abb4d, test-f23b10b6-fb22-4506-87fd-920bc66b5d67, test-3f14ab9c-3665-49b0-ac6e-c8c71ede9489, test-89ccc582-2988-41f8-9dc2-24d08e345ce7, test-7ac71396-89f0-4dc3-a22d-9ace268c2159, test-440b1877-d185-4731-b5a1-724f3af0a1a8, test-04dc9c64-f952-4f69-ab0c-ea7f629b3012, test-a63b810c-bdde-4bf3-abe3-ef7425b40ac1, test-f4e20dda-7517-432c-8d19-def6283eb503 to disappear; all 10 still exist.
Jan 28 02:05:28: INFO: All 10 test pods no longer exist.
[AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 28 02:05:28.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 02:05:28.777 to 02:08:29.123: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure (same message logged roughly every 2s; repeated lines elided)
Jan 28 02:08:29.123: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f

STEP: Destroying namespace "density-test-windows-2144" for this suite.
• Failure in Spec Teardown (AfterEach) [231.231 seconds]
[sig-windows] [Feature:Windows] Density [Serial] [Slow]
test/e2e/windows/framework.go:27
  create a batch of pods [AfterEach]
  test/e2e/windows/density.go:47
    latency/resource should be within limit when create 10 pods with 0s interval
    test/e2e/windows/density.go:68

    Jan 28 02:08:29.123: All nodes should be ready after test, Not ready nodes: ", capz-conf-x4p77"
    vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00021f860, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","total":61,"completed":30,"skipped":4672,"failed":14,"failures":["[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
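The "Waiting for pod ... to disappear" / "still exists" / "no longer exists" lines earlier in this spec are a poll loop on pod deletion. A hedged sketch using client-go's wait helpers, matching the roughly 30s spacing between the two poll rounds in the timestamps above (the function name and signature are illustrative, not the framework's real helper):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodToDisappear polls until a Get on the pod returns NotFound.
func waitForPodToDisappear(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(30*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "no longer exists"
		}
		return false, err // nil err means "still exists": keep polling
	})
}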
S (run of skipped-spec "S" markers elided)
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 28 02:08:29.195: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Jan 28 02:08:29.627: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 02:08:29.691 to 02:09:30.103: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure (same message logged roughly every 2s; repeated lines elided)
Jan 28 02:09:30.104: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 02:09:30.165: INFO: Logging pods the apiserver thinks is on node capz-conf-mpgmr before test
Jan 28 02:09:30.231: INFO: calico-node-windows-pkjkv from calico-system started at 2023-01-27 23:28:48 +0000 UTC (2 container statuses recorded)
Jan 28 02:09:30.231: INFO: Container calico-node-felix ready: true, restart count 1
Jan 28 02:09:30.231: INFO: Container calico-node-startup ready: true, restart count 0
Jan 28 02:09:30.231: INFO: containerd-logger-7b895 from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 28 02:09:30.231: INFO: Container containerd-logger ready: true, restart count 0
Jan 28 02:09:30.231: INFO: csi-azuredisk-node-win-jxz5f from kube-system started at 2023-01-28 00:58:57 +0000 UTC (3 container statuses recorded)
Jan 28 02:09:30.231: INFO: Container azuredisk ready: true, restart count 0
Jan 28 02:09:30.231: INFO: Container liveness-probe ready: true, restart count 0
Jan 28 02:09:30.231: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 28 02:09:30.231: INFO: csi-proxy-jp5b8 from kube-system started at 2023-01-28 00:58:57 +0000 UTC (1 container statuses recorded)
Jan 28 02:09:30.231: INFO: Container csi-proxy ready: true, restart count 0
Jan 28 02:09:30.231: INFO: kube-proxy-windows-bd49q from kube-system started at 2023-01-27 23:28:48 +0000 UTC (1 container statuses recorded)
Jan 28 02:09:30.231: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ad7bb406-f368-4464-9ace-deba60c5065c 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.5 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-ad7bb406-f368-4464-9ace-deba60c5065c off the node capz-conf-mpgmr
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ad7bb406-f368-4464-9ace-deba60c5065c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 28 02:14:45.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 02:14:45.440 to 02:15:01.504: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-28 00:57:26 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-28 00:57:27 +0000 UTC}]. Failure (same message logged roughly every 2s; repeated lines elided)
Failure Jan 28 02:15:03.505: INFO: Condition Ready of node capz-conf-x4p77 is false, but Node is tainted by
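On why pod5 must stay unscheduled in the spec above: a hostPort bound to hostIP 0.0.0.0 (which is what an empty hostIP defaults to) claims that port on every node address, so any pod requesting the same port and protocol with a specific hostIP on the same node conflicts. A sketch of the pod4/pod5 pair, with port values taken from the STEP lines; the image and the node-selector pinning are simplifying assumptions (the real test selects the node via the random e2e label shown above):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conflictingHostPortPods builds two pods with the same hostPort and
// protocol but different hostIPs. The scheduler should place pod4 and
// leave pod5 Pending, since the 0.0.0.0 wildcard already covers 10.1.0.5.
func conflictingHostPortPods() (*corev1.Pod, *corev1.Pod) {
	pod4 := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod4"},
		Spec: corev1.PodSpec{
			// Pin both pods to one node; hostname used here for simplicity.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "capz-conf-mpgmr"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // assumption
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        "0.0.0.0", // wildcard: claims the port on all node IPs
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
	pod5 := pod4.DeepCopy()
	pod5.Name = "pod5"
	pod5.Spec.Containers[0].Ports[0].HostIP = "10.1.0.5" // collides with pod4's wildcard
	return pod4, pod5
}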