Result | FAILURE
Tests | 1 failed / 2 succeeded
Started |
Elapsed | 3h38m |
Revision | release-1.7 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Unexpected error:
    <*errors.withStack | 0xc002338de0>: {
        error: <*errors.withMessage | 0xc002cd62c0>{
            cause: <*errors.errorString | 0xc0001776a0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x33843b9, 0x3612527, 0x193033b, 0x1943e38, 0x14c5741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/27/23 22:27:00.669
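The --ginkgo.focus argument above selects only the CAPZ "Conformance Tests conformance-tests" spec that failed here. A minimal sketch of re-running roughly that spec locally with the ginkgo CLI, assuming a checked-out cluster-api-provider-azure tree and the Azure credentials/environment the suite expects (the CI wrapper and make targets the job actually runs under are not shown in this log, and the focus string below is a simplified stand-in for the job's regex):

    # hypothetical local re-run of only the conformance-tests spec from the CAPZ e2e suite
    cd "$GOPATH/src/sigs.k8s.io/cluster-api-provider-azure"
    ginkgo -v --focus 'Conformance Tests conformance-tests' ./test/e2e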
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/27/23 19:11:07.707
INFO: Cluster name is capz-conf-sz5101
STEP: Creating namespace "capz-conf-sz5101" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:11:07.707
Jan 27 19:11:07.707: INFO: starting to create namespace for hosting the "capz-conf-sz5101" test spec
INFO: Creating namespace capz-conf-sz5101
INFO: Creating event watcher for namespace "capz-conf-sz5101"
< Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/27/23 19:11:07.772 (66ms)
> Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/27/23 19:11:07.773
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/27/23 19:11:07.773
conformance-tests
  Name                        | N | Min       | Median    | Mean      | StdDev | Max
  ========================================================================================
  cluster creation [duration] | 1 | 8m0.6161s | 8m0.6161s | 8m0.6161s | 0s     | 8m0.6161s
INFO: Creating the workload cluster with name "capz-conf-sz5101" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.25.7-rc.0.9+7366fab496852d, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-sz5101 --infrastructure (default) --kubernetes-version v1.25.7-rc.0.9+7366fab496852d --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/27/23 19:11:12.613
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/27/23 19:13:12.761
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/27/23 19:13:12.761
Jan 27 19:15:53.034: INFO: getting history for release projectcalico
Jan 27 19:15:53.074: INFO: Release projectcalico does not exist, installing it
Jan 27 19:15:53.886: INFO: creating 1 resource(s)
Jan 27 19:15:53.946: INFO: creating 1 resource(s)
Jan 27 19:15:53.995: INFO: creating 1 resource(s)
Jan 27 19:15:54.051: INFO: creating 1 resource(s)
Jan 27 19:15:54.107: INFO: creating 1 resource(s)
Jan 27 19:15:54.172: INFO: creating 1 resource(s)
Jan 27 19:15:54.297: INFO: creating 1 resource(s)
Jan 27 19:15:54.368: INFO: creating 1 resource(s)
Jan 27 19:15:54.422: INFO: creating 1 resource(s)
Jan 27 19:15:54.483: INFO: creating 1 resource(s)
Jan 27 19:15:54.533: INFO: creating 1 resource(s)
Jan 27 19:15:54.578: INFO: creating 1 resource(s)
Jan 27 19:15:54.631: INFO: creating 1 resource(s)
Jan 27 19:15:54.680: INFO: creating 1 resource(s)
Jan 27 19:15:54.730: INFO: creating 1 resource(s)
Jan 27 19:15:54.794: INFO: creating 1 resource(s)
Jan 27 19:15:54.867: INFO: creating 1 resource(s)
Jan 27 19:15:54.943: INFO: creating 1 resource(s)
Jan 27 19:15:55.025: INFO: creating 1 resource(s)
Jan 27 19:15:55.145: INFO: creating 1 resource(s)
Jan 27 19:15:55.416: INFO: creating 1 resource(s)
Jan 27 19:15:55.472: INFO: Clearing discovery cache
Jan 27 19:15:55.473: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 27 19:15:58.218: INFO: creating 1 resource(s)
Jan 27 19:15:58.799: INFO: creating 6 resource(s)
Jan 27 19:15:59.453: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/27/23 19:15:59.758
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:15:59.91
Jan 27 19:15:59.910: INFO: starting to wait for deployment to become available
Jan 27 19:16:09.985: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.074717366s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/27/23 19:16:11.059
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:16:11.251
Jan 27 19:16:11.251: INFO: starting to wait for deployment to become available
Jan 27 19:17:11.621: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.369521998s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:11.955
Jan 27 19:17:11.955: INFO: starting to wait for deployment to become available
Jan 27 19:17:11.992: INFO: Deployment calico-system/calico-typha is now available, took 37.474222ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/27/23 19:17:11.992
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:12.325
Jan 27 19:17:12.325: INFO: starting to wait for deployment to become available
Jan 27 19:17:22.468: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.143159169s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/27/23 19:17:22.468
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:22.815
Jan 27 19:17:22.815: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 27 19:17:22.852: INFO: 1 daemonset calico-system/calico-node pods are running, took 37.88657ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/27/23 19:17:22.852
STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:23.125
Jan 27 19:17:23.125: INFO: waiting for daemonset calico-system/calico-node-windows to be complete
Jan 27 19:17:23.162: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 37.729322ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/27/23 19:17:23.163
STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:23.43
Jan 27 19:17:23.430: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete
Jan 27 19:17:23.471: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 41.380446ms
INFO: Waiting for the first control plane machine managed by capz-conf-sz5101/capz-conf-sz5101-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/27/23 19:17:23.491
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/27/23 19:17:23.497
Jan 27 19:17:23.554: INFO: getting history for release azuredisk-csi-driver-oot
Jan 27 19:17:23.592: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 27 19:17:26.749: INFO: creating 1 resource(s)
Jan 27 19:17:26.887: INFO: creating 18 resource(s)
Jan 27 19:17:27.260: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/27/23 19:17:27.277
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:27.435
Jan 27 19:17:27.435: INFO: starting to wait for deployment to become available
Jan 27 19:17:57.697: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.262202753s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/27/23 19:17:57.698
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:57.989
Jan 27 19:17:57.989: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 27 19:17:58.034: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 44.509927ms
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/27/23 19:17:58.217
Jan 27 19:17:58.217: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 27 19:17:58.254: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 36.792718ms
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-sz5101/capz-conf-sz5101-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/27/23 19:17:58.273
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/27/23 19:17:58.282
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/27/23 19:17:58.31
STEP: Checking all the machines controlled by capz-conf-sz5101-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 19:17:58.325
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/27/23 19:17:58.336
STEP: Checking all the machines controlled by capz-conf-sz5101-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 19:19:08.439
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '' for version 'v1.25.7-rc.0.9+7366fab496852d'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "-ginkgo.slow-spec-threshold=120s" "-ginkgo.trace=true" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.flakeAttempts=0" "-ginkgo.progress=true" "-ginkgo.timeout=4h" "-ginkgo.v=true" "-node-os-distro=windows" "-prepull-images=true" "-disable-log-dump=true" "-dump-logs-on-failure=true"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/27/23 19:19:08.581
I0127 19:19:15.136535 13 e2e.go:116] Starting e2e run "e8b760de-9200-4e0a-a715-d7f3eccab474" on Ginkgo node 1
Jan 27 19:19:15.149: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1674847155 - will randomize all specs
Will run 70 of 7066 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
Jan 27 19:19:15.405: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 27 19:19:15.407: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 27 19:19:15.649: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 27 19:19:15.781: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 19:19:15.781: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 27 19:19:15.781: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 27 19:19:15.781: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
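The cluster-template generation, Calico CNI install, and Azure Disk CSI install logged above can be reproduced by hand: clusterctl against the management cluster, the helm installs against the resulting workload cluster's kubeconfig. A rough sketch follows; the clusterctl command is copied from the log, while the helm repository URLs and chart names are assumptions (the log only shows the release names "projectcalico" and "azuredisk-csi-driver-oot"):

    # generate and apply the same workload cluster template the test used
    clusterctl config cluster capz-conf-sz5101 \
      --kubernetes-version v1.25.7-rc.0.9+7366fab496852d \
      --control-plane-machine-count 1 --worker-machine-count 0 \
      --flavor conformance-ci-artifacts-windows-containerd > cluster.yaml
    kubectl apply -f cluster.yaml
    # install Calico via the tigera-operator chart on the workload cluster
    # (repo URL and chart name are assumed, not taken from this log)
    helm repo add projectcalico https://docs.tigera.io/calico/charts
    helm install projectcalico projectcalico/tigera-operator
    # install the out-of-tree Azure Disk CSI driver (chart source likewise assumed)
    helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
    helm install azuredisk-csi-driver-oot azuredisk-csi-driver/azuredisk-csi-driver --namespace kube-system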
Jan 27 19:19:15.781: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:15.781: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:15.781: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:15.781: INFO: Jan 27 19:19:17.917: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:17.917: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:17.917: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Jan 27 19:19:17.917: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:17.917: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:17.917: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:17.917: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:17.917: INFO: Jan 27 19:19:19.912: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:19.912: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:19.912: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Jan 27 19:19:19.912: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:19.912: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:19.912: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:19.912: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:19.912: INFO: Jan 27 19:19:21.910: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:21.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:21.910: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Jan 27 19:19:21.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:21.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:21.910: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:21.910: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:21.910: INFO: Jan 27 19:19:23.916: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:23.916: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:23.916: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Jan 27 19:19:23.916: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:23.916: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:23.916: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:23.916: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:23.916: INFO: Jan 27 19:19:25.911: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:25.911: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:25.911: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Jan 27 19:19:25.911: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:25.911: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:25.911: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:25.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:25.911: INFO: Jan 27 19:19:27.913: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:27.913: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:27.913: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Jan 27 19:19:27.913: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:27.913: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:27.913: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:27.913: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotInitialized containers with incomplete status: [init]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:27.913: INFO: Jan 27 19:19:29.910: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:29.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:29.910: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) Jan 27 19:19:29.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:29.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:29.910: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:29.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:29.911: INFO: Jan 27 19:19:31.910: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:31.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:31.910: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) Jan 27 19:19:31.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:31.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:31.910: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:31.910: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:31.910: INFO: Jan 27 19:19:33.910: INFO: The status of Pod csi-azuredisk-node-win-7gwtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:33.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:33.910: INFO: 16 / 18 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) Jan 27 19:19:33.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:33.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:33.910: INFO: csi-azuredisk-node-win-7gwtl capz-conf-d9r4r Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:47 +0000 UTC }] Jan 27 19:19:33.910: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:33.910: INFO: Jan 27 19:19:35.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:35.910: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (20 seconds elapsed) Jan 27 19:19:35.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:35.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:35.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:35.911: INFO: Jan 27 19:19:37.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:37.910: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) Jan 27 19:19:37.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:37.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:37.910: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:37.910: INFO: Jan 27 19:19:39.911: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:39.911: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (24 seconds elapsed) Jan 27 19:19:39.911: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:39.911: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:39.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:39.911: INFO: Jan 27 19:19:41.909: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:41.909: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (26 seconds elapsed) Jan 27 19:19:41.909: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:41.909: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:41.909: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:41.909: INFO: Jan 27 19:19:43.910: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (28 seconds elapsed) Jan 27 19:19:43.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:43.910: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node-win' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: Pre-pulling images so that they are cached for the tests. 
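The readiness the suite polls for here can also be checked by hand against the workload cluster; a small sketch using standard kubectl commands (the pod and daemonset names are taken from the log, the kubeconfig path from the e2e invocation above):

    export KUBECONFIG=/tmp/kubeconfig
    # overall pod state in kube-system, including the Windows CSI node pods
    kubectl -n kube-system get pods -o wide
    # wait for the Windows azuredisk node plugin daemonset to finish rolling out
    kubectl -n kube-system rollout status daemonset/csi-azuredisk-node-win --timeout=10m
    # inspect events on a pod that is stuck in Pending/init
    kubectl -n kube-system describe pod csi-azuredisk-node-win-7gwtl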
Jan 27 19:19:44.253: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Jan 27 19:19:44.305: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:19:44.356: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:19:44.356: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:19:53.399: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:19:53.451: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:19:53.451: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:20:02.401: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:02.449: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:20:02.449: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:20:11.399: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.449: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 2 Jan 27 19:20:11.449: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Jan 27 19:20:11.449: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Jan 27 19:20:11.493: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.543: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2: 2 Jan 27 19:20:11.543: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Jan 27 19:20:11.543: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2 Jan 27 19:20:11.586: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.634: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 2 Jan 27 19:20:11.634: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2 Jan 27 19:20:11.634: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2 Jan 27 19:20:11.676: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.727: INFO: Number of nodes 
with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2: 2
Jan 27 19:20:11.727: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Jan 27 19:20:11.763: INFO: e2e test version: v1.25.7-rc.0.9+7366fab496852d
Jan 27 19:20:11.793: INFO: kube-apiserver version: v1.25.7-rc.0.9+7366fab496852d
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76
Jan 27 19:20:11.793: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 27 19:20:11.826: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [56.423 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:76
Jan 27 19:19:35.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:35.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:35.911: INFO: Jan 27 19:19:37.910: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:37.910: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) Jan 27 19:19:37.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:37.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:37.910: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:37.910: INFO: Jan 27 19:19:39.911: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:39.911: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (24 seconds elapsed) Jan 27 19:19:39.911: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:39.911: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:39.911: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:39.911: INFO: Jan 27 19:19:41.909: INFO: The status of Pod csi-azuredisk-node-win-vgkvl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 27 19:19:41.909: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (26 seconds elapsed) Jan 27 19:19:41.909: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
Jan 27 19:19:41.909: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 19:19:41.909: INFO: csi-azuredisk-node-win-vgkvl capz-conf-7xz7d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:19:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC ContainersNotReady containers with unready status: [liveness-probe node-driver-registrar azuredisk]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 19:18:55 +0000 UTC }] Jan 27 19:19:41.909: INFO: Jan 27 19:19:43.910: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (28 seconds elapsed) Jan 27 19:19:43.910: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. Jan 27 19:19:43.910: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node-win' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed) Jan 27 19:19:43.962: INFO: Pre-pulling images so that they are cached for the tests. 
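The readiness gate above (all kube-system pods Running/Ready, then every DaemonSet fully scheduled) is what the suite checks before pre-pulling test images. A minimal client-go sketch of that DaemonSet check, assuming the /tmp/kubeconfig path used elsewhere in this run; the helper name and 2s poll interval are illustrative, not the framework's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDaemonSetsReady polls until every DaemonSet in the namespace reports
// as many available pods as it wants scheduled, mirroring the
// "N / N pods ready in daemonset ..." lines in the log above.
func waitForDaemonSetsReady(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	for {
		dsList, err := cs.AppsV1().DaemonSets(namespace).List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		ready := true
		for _, ds := range dsList.Items {
			if ds.Status.NumberAvailable < ds.Status.DesiredNumberScheduled {
				fmt.Printf("%d / %d pods ready in daemonset %q\n",
					ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled, ds.Name)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForDaemonSetsReady(ctx, cs, "kube-system"); err != nil {
		panic(err)
	}
	fmt.Println("all daemonsets in kube-system are ready")
}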
Jan 27 19:19:44.253: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Jan 27 19:19:44.305: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:19:44.356: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:19:44.356: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:19:53.399: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:19:53.451: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:19:53.451: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:20:02.401: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:02.449: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0 Jan 27 19:20:02.449: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 19:20:11.399: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.449: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 2 Jan 27 19:20:11.449: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40 Jan 27 19:20:11.449: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Jan 27 19:20:11.493: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.543: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2: 2 Jan 27 19:20:11.543: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2 Jan 27 19:20:11.543: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2 Jan 27 19:20:11.586: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.634: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 2 Jan 27 19:20:11.634: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2 Jan 27 19:20:11.634: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2 Jan 27 19:20:11.676: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 19:20:11.727: INFO: Number of nodes 
with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2: 2 Jan 27 19:20:11.727: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2 Jan 27 19:20:11.763: INFO: e2e test version: v1.25.7-rc.0.9+7366fab496852d Jan 27 19:20:11.793: INFO: kube-apiserver version: v1.25.7-rc.0.9+7366fab496852d [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76 Jan 27 19:20:11.793: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 27 19:20:11.826: INFO: Cluster IP family: ipv4 �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[1mvalidates basic preemption works [Conformance]�[0m �[38;5;243mtest/e2e/scheduling/preemption.go:125�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:20:11.853�[0m Jan 27 19:20:11.853: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m01/27/23 19:20:11.855�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:20:11.959�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:20:12.02�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 27 19:20:12.190: INFO: Waiting up to 1m0s for all nodes to be ready Jan 27 19:21:12.481: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125 �[1mSTEP:�[0m Create pods that use 4/5 of node resources. �[38;5;243m01/27/23 19:21:12.513�[0m Jan 27 19:21:12.595: INFO: Created pod: pod0-0-sched-preemption-low-priority Jan 27 19:21:12.630: INFO: Created pod: pod0-1-sched-preemption-medium-priority Jan 27 19:21:12.710: INFO: Created pod: pod1-0-sched-preemption-medium-priority Jan 27 19:21:12.745: INFO: Created pod: pod1-1-sched-preemption-medium-priority �[1mSTEP:�[0m Wait for pods to be scheduled. �[38;5;243m01/27/23 19:21:12.745�[0m Jan 27 19:21:12.745: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-7521" to be "running" Jan 27 19:21:12.776: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 31.084603ms Jan 27 19:21:14.808: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063397574s Jan 27 19:21:16.808: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063586591s Jan 27 19:21:18.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064830674s Jan 27 19:21:20.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064803718s Jan 27 19:21:22.814: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068716017s Jan 27 19:21:24.809: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 12.064622402s Jan 27 19:21:26.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.064782475s Jan 27 19:21:28.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 16.065448652s Jan 27 19:21:30.808: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 18.063615476s Jan 27 19:21:32.809: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 20.064556264s Jan 27 19:21:34.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065609863s Jan 27 19:21:36.810: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 24.064772726s Jan 27 19:21:36.810: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Jan 27 19:21:36.810: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-7521" to be "running" Jan 27 19:21:36.842: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 32.757175ms Jan 27 19:21:36.843: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:21:36.843: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-7521" to be "running" Jan 27 19:21:36.875: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 32.546936ms Jan 27 19:21:38.908: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065233438s Jan 27 19:21:40.908: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.065700571s Jan 27 19:21:40.908: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:21:40.908: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-7521" to be "running" Jan 27 19:21:40.941: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 32.71005ms Jan 27 19:21:40.941: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" �[1mSTEP:�[0m Run a high priority pod that has same requirements as that of lower priority pod �[38;5;243m01/27/23 19:21:40.941�[0m Jan 27 19:21:40.976: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-7521" to be "running" Jan 27 19:21:41.008: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 30.966492ms Jan 27 19:21:43.040: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063186221s Jan 27 19:21:45.041: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064673954s Jan 27 19:21:47.042: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.06522002s Jan 27 19:21:47.042: INFO: Pod "preemptor-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Jan 27 19:21:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-preemption-7521" for this suite. 
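The preemption spec above fills roughly 4/5 of each node with low- and medium-priority pods and then submits a higher-priority "preemptor-pod" with an equivalent resource request, so the scheduler must evict a victim to place it. A hedged sketch of the two objects involved; the PriorityClass name, value, image and the 2Gi request are illustrative, not the values the test generates:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPreemptionFixtures creates a high PriorityClass and a pod that uses it.
// A pending pod with this class and a large enough request forces the
// scheduler to preempt lower-priority pods, as in the spec above.
func createPreemptionFixtures(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-high-priority"}, // illustrative name
		Value:       1000,                                             // higher value wins
		Description: "preempts the low/medium priority pods",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}

	preemptor := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Sized so it only fits if a lower-priority pod is evicted.
						corev1.ResourceMemory: resource.MustParse("2Gi"),
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, preemptor, metav1.CreateOptions{})
	return err
}

func main() {} // fixtures only; wire up a clientset as in the DaemonSet sketch above to run this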
01/27/23 19:21:47.245
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","completed":1,"skipped":2,"failed":0}
------------------------------
• [95.615 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:125
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/scheduling/preemption.go:733
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:21:47.471
Jan 27 19:21:47.471: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/27/23 19:21:47.472
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:21:47.592
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:21:47.654
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 27 19:21:47.822: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 27 19:22:48.115: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:22:48.148
Jan 27 19:22:48.149: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path 01/27/23 19:22:48.15
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:22:48.247
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:22:48.31
[BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733
Jan 27 19:22:48.476: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
Jan 27 19:22:48.508: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:187
Jan 27 19:22:48.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-6281" for this suite. 01/27/23 19:22:48.72
[AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Jan 27 19:22:48.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5369" for this suite.
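The two "Forbidden" INFO lines above are expected: the spec deliberately tries to update the value of an existing PriorityClass, which the API server rejects because value is immutable after creation, while mutable fields can still be patched. A small sketch of that difference, assuming the name "p1" from the log; the value 100 and the description patch are illustrative and the real test exercises more HTTP methods:

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// demonstratePriorityClassImmutability shows that updating .value is rejected
// while patching a mutable field such as .description succeeds.
func demonstratePriorityClassImmutability(ctx context.Context, cs kubernetes.Interface) {
	pcs := cs.SchedulingV1().PriorityClasses()

	// Create a PriorityClass named p1, as in the spec above.
	p1, err := pcs.Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      100,
	}, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("create:", err)
		return
	}

	// Attempting to change the value fails with
	// "value: Forbidden: may not be changed in an update."
	p1.Value = 200
	if _, err := pcs.Update(ctx, p1, metav1.UpdateOptions{}); err != nil {
		fmt.Println("expected update rejection:", err)
	}

	// Patching a mutable field such as description is allowed.
	patch := []byte(`{"description":"updated description"}`)
	if _, err := pcs.Patch(ctx, "p1", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		fmt.Println("patch:", err)
	}
}

func main() {} // call demonstratePriorityClassImmutability with a clientset built from /tmp/kubeconfig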
01/27/23 19:22:48.836
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","completed":2,"skipped":38,"failed":0}
------------------------------
• [61.599 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  PriorityClass endpoints test/e2e/scheduling/preemption.go:683
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/scheduling/preemption.go:733
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:267
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:22:49.077
Jan 27 19:22:49.077: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces 01/27/23 19:22:49.079
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:22:49.175
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:22:49.237
[It] should patch a Namespace [Conformance] test/e2e/apimachinery/namespace.go:267
STEP: creating a Namespace 01/27/23 19:22:49.299
STEP: patching the Namespace 01/27/23 19:22:49.398
STEP: get the Namespace and ensuring it has the label 01/27/23 19:22:49.433
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187
Jan 27 19:22:49.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4408" for this suite. 01/27/23 19:22:49.505
STEP: Destroying namespace "nspatchtest-e7686f4a-fed2-49d8-8d52-04e49364d8b4-3653" for this suite.
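The patch step above adds a label to the freshly created namespace and reads it back to confirm the label landed. A minimal sketch of the same flow; the label key and value below are hypothetical, not the ones the test actually sets:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel adds a label to an existing namespace with a JSON merge
// patch and verifies it by reading the namespace back.
func patchNamespaceLabel(ctx context.Context, cs kubernetes.Interface, name string) error {
	patch := []byte(`{"metadata":{"labels":{"e2e-example":"patched"}}}`) // illustrative label
	if _, err := cs.CoreV1().Namespaces().Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("namespace %s labels: %v\n", name, ns.Labels)
	return nil
}

func main() {} // call patchNamespaceLabel with a clientset and the namespace created for the test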
01/27/23 19:22:49.542
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","completed":3,"skipped":120,"failed":0}
------------------------------
• [0.500 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23
  should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:267
------------------------------
[sig-node] Pods
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:675
[BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:22:49.579
Jan 27 19:22:49.579: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods 01/27/23 19:22:49.581
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:22:49.682
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:22:49.743
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:193
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] test/e2e/common/node/pods.go:675
Jan 27 19:22:49.842: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-5631" to be "running and ready"
Jan 27 19:22:49.873: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false.
Elapsed: 31.191689ms Jan 27 19:22:49.873: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:22:51.907: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064551568s Jan 27 19:22:51.907: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:22:53.907: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065384135s Jan 27 19:22:53.907: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:22:55.906: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 6.064311261s Jan 27 19:22:55.906: INFO: The phase of Pod pod-back-off-image is Running (Ready = true) Jan 27 19:22:55.906: INFO: Pod "pod-back-off-image" satisfied condition "running and ready" �[1mSTEP:�[0m getting restart delay-0 �[38;5;243m01/27/23 19:23:55.939�[0m Jan 27 19:24:43.494: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-27 19:23:57 +0000 UTC restartedAt=2023-01-27 19:24:41 +0000 UTC (44s) �[1mSTEP:�[0m getting restart delay-1 �[38;5;243m01/27/23 19:24:43.495�[0m Jan 27 19:26:14.422: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-27 19:24:46 +0000 UTC restartedAt=2023-01-27 19:26:12 +0000 UTC (1m26s) �[1mSTEP:�[0m getting restart delay-2 �[38;5;243m01/27/23 19:26:14.422�[0m Jan 27 19:29:02.554: INFO: getRestartDelay: restartCount = 6, finishedAt=2023-01-27 19:26:17 +0000 UTC restartedAt=2023-01-27 19:29:01 +0000 UTC (2m44s) �[1mSTEP:�[0m updating the image �[38;5;243m01/27/23 19:29:02.554�[0m Jan 27 19:29:03.127: INFO: Successfully updated pod "pod-back-off-image" Jan 27 19:29:13.130: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-5631" to be "running" Jan 27 19:29:13.163: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 32.305972ms Jan 27 19:29:13.163: INFO: Pod "pod-back-off-image" satisfied condition "running" �[1mSTEP:�[0m get restart delay after image update �[38;5;243m01/27/23 19:29:13.163�[0m Jan 27 19:29:29.699: INFO: Container's last state is not "Terminated". Jan 27 19:29:30.731: INFO: Container's last state is not "Terminated". Jan 27 19:29:31.765: INFO: Container's last state is not "Terminated". Jan 27 19:29:32.799: INFO: Container's last state is not "Terminated". Jan 27 19:29:33.832: INFO: Container's last state is not "Terminated". Jan 27 19:29:34.865: INFO: Container's last state is not "Terminated". Jan 27 19:29:35.898: INFO: Container's last state is not "Terminated". Jan 27 19:29:36.931: INFO: Container's last state is not "Terminated". Jan 27 19:29:37.965: INFO: Container's last state is not "Terminated". Jan 27 19:29:38.999: INFO: Container's last state is not "Terminated". Jan 27 19:29:40.033: INFO: Container's last state is not "Terminated". Jan 27 19:29:41.068: INFO: Container's last state is not "Terminated". Jan 27 19:29:42.106: INFO: Container's last state is not "Terminated". Jan 27 19:29:43.138: INFO: Container's last state is not "Terminated". Jan 27 19:29:44.178: INFO: Container's last state is not "Terminated". Jan 27 19:29:45.211: INFO: Container's last state is not "Terminated". Jan 27 19:29:46.244: INFO: Container's last state is not "Terminated". Jan 27 19:29:47.277: INFO: Container's last state is not "Terminated". Jan 27 19:29:48.311: INFO: Container's last state is not "Terminated". 
Jan 27 19:29:49.343: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-27 19:29:11 +0000 UTC restartedAt=2023-01-27 19:29:28 +0000 UTC (17s)
[AfterEach] [sig-node] Pods test/e2e/framework/framework.go:187
Jan 27 19:29:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5631" for this suite. 01/27/23 19:29:49.383
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","completed":4,"skipped":148,"failed":0}
------------------------------
• [SLOW TEST] [419.838 seconds] [sig-node] Pods test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:675
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/apimachinery/garbage_collector.go:312
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:29:49.419
Jan 27 19:29:49.420: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/27/23 19:29:49.421
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:29:49.521
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:29:49.582
[It] should delete pods created by rc when not orphaning [Conformance] test/e2e/apimachinery/garbage_collector.go:312
STEP: create the rc 01/27/23 19:29:49.644
STEP: delete the rc 01/27/23 19:29:54.716
STEP: wait for all pods to be garbage collected 01/27/23 19:29:54.752
STEP: Gathering metrics 01/27/23 19:29:59.821
Jan 27 19:29:59.940: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready"
Jan 27 19:29:59.976: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true.
Elapsed: 35.448635ms
Jan 27 19:29:59.976: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true)
Jan 27 19:29:59.976: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready"
Jan 27 19:30:00.383: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
Jan 27 19:30:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5898" for this suite. 01/27/23 19:30:00.418
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","completed":5,"skipped":197,"failed":0}
------------------------------
• [11.036 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/apimachinery/garbage_collector.go:312
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/common/node/expansion.go:185
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:30:00.458
Jan 27 19:30:00.458: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 01/27/23 19:30:00.46
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:30:00.558
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:30:00.619
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:185
Jan 27 19:30:00.720: INFO: Waiting up to 2m0s for pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae" in namespace "var-expansion-4950" to be "container 0 failed with reason CreateContainerConfigError"
Jan 27 19:30:00.751: INFO: Pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae": Phase="Pending", Reason="", readiness=false. Elapsed: 31.46375ms
Jan 27 19:30:02.784: INFO: Pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064284775s
Jan 27 19:30:04.783: INFO: Pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.063441706s Jan 27 19:30:04.783: INFO: Pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae" satisfied condition "container 0 failed with reason CreateContainerConfigError" Jan 27 19:30:04.784: INFO: Deleting pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae" in namespace "var-expansion-4950" Jan 27 19:30:04.821: INFO: Wait up to 5m0s for pod "var-expansion-d55f7e5f-75e8-48c3-beb8-0651d6b59dae" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 Jan 27 19:30:08.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4950" for this suite. 01/27/23 19:30:08.921 {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","completed":6,"skipped":214,"failed":0}
------------------------------
• [8.500 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:185
------------------------------
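Note on what this spec exercises: a volume mount's subPathExpr must resolve to a relative path, so when the expanded value is absolute the kubelet refuses to configure the container, which is consistent with the CreateContainerConfigError the pod above waits for. Below is a minimal client-go sketch of such a pod, not the conformance suite's actual fixture; the pod name, image, env var, and paths are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// badSubPathPod builds a pod whose subPathExpr expands to an absolute path.
// The API server accepts it (the expression cannot be resolved at admission),
// but the kubelet rejects the expanded absolute subpath, so the pod stays
// Pending with reason CreateContainerConfigError.
func badSubPathPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "demo",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []corev1.EnvVar{{Name: "BAD_SUBPATH", Value: "/absolute/path"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/data",
					SubPathExpr: "$(BAD_SUBPATH)", // expands to an absolute path -> rejected by the kubelet
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}

func main() {
	// Print the manifest; applying it to a cluster reproduces the failure mode logged above.
	out, _ := json.MarshalIndent(badSubPathPod("var-expansion-demo"), "", "  ")
	fmt.Println(string(out))
}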
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 19:30:08.963 Jan 27 19:30:08.963: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob 01/27/23 19:30:08.964 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:30:09.067 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:30:09.129 [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 STEP: Creating a suspended cronjob 01/27/23 19:30:09.19 STEP: Ensuring no jobs are scheduled 01/27/23 19:30:09.227 STEP: Ensuring no job exists by listing jobs explicitly 01/27/23 19:35:09.293 STEP: Removing cronjob 01/27/23 19:35:09.324 [AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 Jan 27 19:35:09.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8563" for this suite. 01/27/23 19:35:09.396 {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","completed":7,"skipped":266,"failed":0}
------------------------------
• [SLOW TEST] [300.469 seconds] [sig-apps] CronJob test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96
------------------------------
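The CronJob spec above passes when a suspended CronJob produces no Jobs across several schedule intervals. A rough client-go sketch of the same check follows; it is not the conformance suite's code. The kubeconfig path is the one from the log, while the namespace, object names, schedule, image, and wait duration are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // kubeconfig path taken from the log above
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	suspend := true
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // would fire every minute if it were not suspended
			Suspend:  &suspend,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "noop",
								Image:   "busybox", // assumed image
								Command: []string{"true"},
							}},
						},
					},
				},
			},
		},
	}

	ctx := context.Background()
	ns := "cronjob-demo" // assumed pre-existing namespace
	if _, err := client.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Wait a few schedule intervals, then assert that no Jobs were spawned.
	time.Sleep(3 * time.Minute)
	jobs, err := client.BatchV1().Jobs(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("jobs created while suspended: %d (want 0)\n", len(jobs.Items))
}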
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:61 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 19:35:09.44 Jan 27 19:35:09.440: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 19:35:09.442 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:35:09.541 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:35:09.603 [It] Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:61 STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 1 replicas 01/27/23 19:35:09.665 STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-6923 01/27/23 19:35:09.709 I0127 19:35:09.746115 13 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-6923, replica
count: 1 I0127 19:35:19.797456 13 runners.go:193] rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 19:35:19.797�[0m �[1mSTEP:�[0m creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-6923 �[38;5;243m01/27/23 19:35:19.84�[0m I0127 19:35:19.878230 13 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-6923, replica count: 1 I0127 19:35:29.932826 13 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 19:35:34.937: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Jan 27 19:35:34.969: INFO: RC rc: consume 250 millicores in total Jan 27 19:35:34.970: INFO: RC rc: setting consumption to 250 millicores in total Jan 27 19:35:34.970: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:35:34.970: INFO: RC rc: consume 0 MB in total Jan 27 19:35:34.970: INFO: RC rc: disabling mem consumption Jan 27 19:35:34.970: INFO: RC rc: consume custom metric 0 in total Jan 27 19:35:34.970: INFO: RC rc: disabling consumption of custom metric QPS Jan 27 19:35:34.970: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:35:35.037: INFO: waiting for 3 replicas (current: 1) Jan 27 19:35:55.072: INFO: waiting for 3 replicas (current: 1) Jan 27 19:36:11.044: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:36:11.044: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:36:15.074: INFO: waiting for 3 replicas (current: 3) Jan 27 19:36:15.106: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:36:15.137: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:1 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e8e4} Jan 27 19:36:25.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:36:25.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e234} Jan 27 19:36:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:36:35.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e2ec} Jan 27 19:36:44.094: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:36:44.094: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:36:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:36:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e5fc} Jan 27 19:36:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 
19:36:55.205: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e71c} Jan 27 19:37:05.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8c74} Jan 27 19:37:14.140: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:37:14.140: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:37:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:15.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8e5c} Jan 27 19:37:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:25.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8f2c} Jan 27 19:37:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:35.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292eb2c} Jan 27 19:37:44.183: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:37:44.184: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:37:45.172: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:45.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292f07c} Jan 27 19:37:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:37:55.205: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9234} Jan 27 19:38:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292f14c} Jan 27 19:38:14.233: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:38:14.234: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:38:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:15.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26a34} Jan 27 19:38:25.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:25.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 
DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d260f4} Jan 27 19:38:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:35.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2636c} Jan 27 19:38:44.275: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:38:44.275: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:38:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f89c4} Jan 27 19:38:55.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:38:55.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e394} Jan 27 19:39:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d265cc} Jan 27 19:39:14.317: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:39:14.318: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:39:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:15.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2682c} Jan 27 19:39:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:25.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e83c} Jan 27 19:39:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:35.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e904} Jan 27 19:39:44.361: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:39:44.361: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:39:45.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:45.208: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292eac4} Jan 27 19:39:55.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:39:55.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26d7c} Jan 27 19:40:05.171: INFO: expecting there to be in [3, 4] 
replicas (are: 3) Jan 27 19:40:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26fdc} Jan 27 19:40:14.404: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:40:14.404: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:40:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:40:15.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292f1a4} Jan 27 19:40:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:40:25.205: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e0a4} Jan 27 19:40:35.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:40:35.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f81c4} Jan 27 19:40:44.448: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:40:44.448: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:40:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:40:45.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8e2c} Jan 27 19:40:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:40:55.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e43c} Jan 27 19:41:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:05.208: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9214} Jan 27 19:41:14.488: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:41:14.488: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:41:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:15.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26444} Jan 27 19:41:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:25.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2662c} Jan 27 19:41:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:35.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 
UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26894} Jan 27 19:41:44.532: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:41:44.532: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:41:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9404} Jan 27 19:41:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:41:55.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e74c} Jan 27 19:42:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:05.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f986c} Jan 27 19:42:14.574: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:42:14.574: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:42:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:15.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9b5c} Jan 27 19:42:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:25.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26b64} Jan 27 19:42:35.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:35.205: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e1ac} Jan 27 19:42:44.617: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:42:44.617: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:42:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8dfc} Jan 27 19:42:55.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:42:55.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e41c} Jan 27 19:43:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:05.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9044} Jan 27 19:43:14.662: INFO: RC rc: 
sending request to consume 250 millicores Jan 27 19:43:14.662: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:43:15.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:15.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2642c} Jan 27 19:43:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:25.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2660c} Jan 27 19:43:35.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:35.201: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26814} Jan 27 19:43:44.706: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:43:44.706: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:43:45.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:45.210: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26a0c} Jan 27 19:43:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:43:55.201: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26bac} Jan 27 19:44:05.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:05.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e80c} Jan 27 19:44:14.748: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:44:14.748: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:44:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:15.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26c7c} Jan 27 19:44:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:25.205: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f95fc} Jan 27 19:44:35.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:35.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f89ac} Jan 27 19:44:44.793: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:44:44.793: INFO: ConsumeCPU URL: {https 
capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:44:45.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f8e6c} Jan 27 19:44:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:44:55.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d262ac} Jan 27 19:45:05.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d263fc} Jan 27 19:45:14.834: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:45:14.834: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:45:15.169: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:15.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d2664c} Jan 27 19:45:25.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:25.202: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f9294} Jan 27 19:45:35.171: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:35.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00292e24c} Jan 27 19:45:44.879: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:45:44.879: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:45:45.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:45.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0014f94fc} Jan 27 19:45:55.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:45:55.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26a04} Jan 27 19:46:05.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:46:05.204: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26ad4} Jan 27 19:46:14.923: INFO: RC rc: sending request to consume 250 millicores Jan 27 19:46:14.924: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 19:46:15.170: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:46:15.203: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26cec} Jan 27 19:46:15.235: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 19:46:15.270: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 19:36:05 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003d26f4c} Jan 27 19:46:15.270: INFO: Number of replicas was stable over 10m0s Jan 27 19:46:15.270: INFO: RC rc: consume 700 millicores in total Jan 27 19:46:15.270: INFO: RC rc: setting consumption to 700 millicores in total Jan 27 19:46:15.302: INFO: waiting for 5 replicas (current: 3) Jan 27 19:46:35.340: INFO: waiting for 5 replicas (current: 3) Jan 27 19:46:44.965: INFO: RC rc: sending request to consume 700 millicores Jan 27 19:46:44.965: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 27 19:46:55.335: INFO: waiting for 5 replicas (current: 3) Jan 27 19:47:15.010: INFO: RC rc: sending request to consume 700 millicores Jan 27 19:47:15.010: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 27 19:47:15.335: INFO: waiting for 5 replicas (current: 5) STEP: Removing consuming RC rc 01/27/23 19:47:15.371 Jan 27 19:47:15.371: INFO: RC rc: stopping metric consumer Jan 27 19:47:15.371: INFO: RC rc: stopping mem consumer Jan 27 19:47:18.053: INFO: RC rc: stopping CPU consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-6923, will wait for the garbage collector to delete the pods 01/27/23 19:47:28.053 Jan 27 19:47:28.172: INFO: Deleting ReplicationController rc took: 35.06207ms Jan 27 19:47:28.272: INFO: Terminating ReplicationController rc pods took: 100.542766ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-6923, will wait for the garbage collector to delete the pods 01/27/23 19:47:30.627 Jan 27 19:47:30.744: INFO: Deleting ReplicationController rc-ctrl took: 34.742583ms Jan 27 19:47:30.845: INFO: Terminating ReplicationController rc-ctrl pods took: 100.68294ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 19:47:32.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-6923" for this suite. 01/27/23 19:47:32.631 {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","completed":8,"skipped":403,"failed":0}
------------------------------
• [SLOW TEST] [743.226 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:59 Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
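The repeated "waiting for N replicas (current: M)" and "HPA status" records above come from the suite polling the scale target until the HorizontalPodAutoscaler has driven it to the expected size and held it there. Below is a small sketch of that polling pattern with client-go, not the e2e framework's actual helper; the 20s interval and 15m timeout are assumptions, while the kubeconfig path, namespace, and controller name are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReplicas polls a ReplicationController until it reports the wanted
// number of ready replicas, mirroring the "waiting for N replicas (current: M)"
// lines in the log above.
func waitForReplicas(ctx context.Context, client kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 20*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			rc, err := client.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			fmt.Printf("waiting for %d replicas (current: %d)\n", want, rc.Status.ReadyReplicas)
			return rc.Status.ReadyReplicas == want, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // kubeconfig path taken from the log above
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "rc" and the namespace appear in the log; the target count and timeout are assumptions.
	if err := waitForReplicas(context.Background(), client, "horizontal-pod-autoscaling-6923", "rc", 5, 15*time.Minute); err != nil {
		panic(err)
	}
}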
capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6923/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 27 19:47:15.335: INFO: waiting for 5 replicas (current: 5) STEP: Removing consuming RC rc 01/27/23 19:47:15.371 Jan 27 19:47:15.371: INFO: RC rc: stopping metric consumer Jan 27 19:47:15.371: INFO: RC rc: stopping mem consumer Jan 27 19:47:18.053: INFO: RC rc: stopping CPU consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-6923, will wait for the garbage collector to delete the pods 01/27/23 19:47:28.053 Jan 27 19:47:28.172: INFO: Deleting ReplicationController rc took: 35.06207ms Jan 27 19:47:28.272: INFO: Terminating ReplicationController rc pods took: 100.542766ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-6923, will wait for the garbage collector to delete the pods 01/27/23 19:47:30.627 Jan 27 19:47:30.744: INFO: Deleting ReplicationController rc-ctrl took: 34.742583ms Jan 27 19:47:30.845: INFO: Terminating ReplicationController rc-ctrl pods took: 100.68294ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 19:47:32.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-6923" for this suite. 01/27/23 19:47:32.631 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds test/e2e/windows/kubelet_stats.go:47 [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 19:47:32.67 Jan 27 19:47:32.671: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-stats-test-windows-serial 01/27/23 19:47:32.672 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:47:32.77 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:47:32.832 [It] should return within 10 seconds test/e2e/windows/kubelet_stats.go:47 STEP: Selecting a Windows node 01/27/23 19:47:32.893 Jan 27 19:47:32.927: INFO: Using node: capz-conf-7xz7d STEP: Scheduling 10 pods 01/27/23 19:47:32.927 Jan 27 19:47:32.968: INFO: Waiting up to 5m0s for pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO:
Waiting up to 5m0s for pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO: Waiting up to 5m0s for pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO: Waiting up to 5m0s for pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.999: INFO: Waiting up to 5m0s for pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.330699ms Jan 27 19:47:33.001: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.002: INFO: Waiting up to 5m0s for pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.002: INFO: Waiting up to 5m0s for pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.008: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.957772ms Jan 27 19:47:33.008: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.009: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 35.23034ms Jan 27 19:47:33.009: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.009: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.221348ms Jan 27 19:47:33.009: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.030: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.172587ms Jan 27 19:47:33.030: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.033: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.347385ms Jan 27 19:47:33.033: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.033: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.645657ms Jan 27 19:47:33.033: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.033: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.620191ms Jan 27 19:47:33.033: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.033: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.058221ms Jan 27 19:47:33.033: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:33.034: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.632013ms Jan 27 19:47:33.034: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.033: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065238445s Jan 27 19:47:35.033: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.040: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067002382s Jan 27 19:47:35.040: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.042: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068647153s Jan 27 19:47:35.042: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.043: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069728212s Jan 27 19:47:35.043: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063308436s Jan 27 19:47:35.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064977052s Jan 27 19:47:35.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066113196s Jan 27 19:47:35.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.067: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066163094s Jan 27 19:47:35.067: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06503093s Jan 27 19:47:35.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:35.069: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067424602s Jan 27 19:47:35.069: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.037: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069463261s Jan 27 19:47:37.037: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068016084s Jan 27 19:47:37.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068143087s Jan 27 19:47:37.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068855146s Jan 27 19:47:37.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063647616s Jan 27 19:47:37.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.075: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.073784457s Jan 27 19:47:37.075: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.075: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074081168s Jan 27 19:47:37.076: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.075: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074265612s Jan 27 19:47:37.076: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.076: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073507867s Jan 27 19:47:37.076: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:37.077: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07439332s Jan 27 19:47:37.077: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066421354s Jan 27 19:47:39.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067769258s Jan 27 19:47:39.041: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068193035s Jan 27 19:47:39.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068837251s Jan 27 19:47:39.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063377895s Jan 27 19:47:39.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.066: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.065263068s Jan 27 19:47:39.067: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064026092s Jan 27 19:47:39.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065287389s Jan 27 19:47:39.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066307524s Jan 27 19:47:39.068: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:39.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065171377s Jan 27 19:47:39.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.036: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067781162s Jan 27 19:47:41.036: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067846044s Jan 27 19:47:41.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068166014s Jan 27 19:47:41.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069217668s Jan 27 19:47:41.043: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063079837s Jan 27 19:47:41.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.066: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.065058065s Jan 27 19:47:41.066: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065384577s Jan 27 19:47:41.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064567433s Jan 27 19:47:41.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066709158s Jan 27 19:47:41.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:41.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065422618s Jan 27 19:47:41.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067359145s Jan 27 19:47:43.036: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068179408s Jan 27 19:47:43.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.042: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068654762s Jan 27 19:47:43.042: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.043: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069492054s Jan 27 19:47:43.043: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06354547s Jan 27 19:47:43.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.065: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.064194424s Jan 27 19:47:43.065: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.066: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06459683s Jan 27 19:47:43.066: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0653699s Jan 27 19:47:43.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064468653s Jan 27 19:47:43.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:43.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0646913s Jan 27 19:47:43.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.066150542s Jan 27 19:47:45.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0687679s Jan 27 19:47:45.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.043: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069284066s Jan 27 19:47:45.043: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069553631s Jan 27 19:47:45.043: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.065: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065992803s Jan 27 19:47:45.065: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.06570611s Jan 27 19:47:45.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.064540059s Jan 27 19:47:45.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067016586s Jan 27 19:47:45.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065929385s Jan 27 19:47:45.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:45.069: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067400991s Jan 27 19:47:45.069: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.066951253s Jan 27 19:47:47.035: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.06794339s Jan 27 19:47:47.041: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.041: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068092737s Jan 27 19:47:47.041: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.042: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.069033536s Jan 27 19:47:47.042: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.064067209s Jan 27 19:47:47.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.064958809s Jan 27 19:47:47.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.066192319s Jan 27 19:47:47.068: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.067250918s Jan 27 19:47:47.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.069: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.067342569s Jan 27 19:47:47.069: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:47.069: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.066537191s Jan 27 19:47:47.069: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.066804974s Jan 27 19:47:49.035: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067436426s Jan 27 19:47:49.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067975358s Jan 27 19:47:49.041: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.068604484s Jan 27 19:47:49.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064228127s Jan 27 19:47:49.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.06447139s Jan 27 19:47:49.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.065034412s Jan 27 19:47:49.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.068: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.066436232s Jan 27 19:47:49.068: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067256231s Jan 27 19:47:49.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:49.069: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.067580194s Jan 27 19:47:49.069: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.037: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068697795s Jan 27 19:47:51.037: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068171493s Jan 27 19:47:51.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.06855944s Jan 27 19:47:51.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.069417743s Jan 27 19:47:51.043: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.063862882s Jan 27 19:47:51.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.069: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.066483594s Jan 27 19:47:51.069: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.069: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.066738307s Jan 27 19:47:51.069: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.070: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068493429s Jan 27 19:47:51.070: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.070: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068598754s Jan 27 19:47:51.070: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:51.070: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.069179359s Jan 27 19:47:51.070: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066081111s Jan 27 19:47:53.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.067763925s Jan 27 19:47:53.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.068229538s Jan 27 19:47:53.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.042: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069022492s Jan 27 19:47:53.042: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06339956s Jan 27 19:47:53.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.066: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.064937506s Jan 27 19:47:53.066: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06550851s Jan 27 19:47:53.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066348775s Jan 27 19:47:53.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.065255856s Jan 27 19:47:53.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:53.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06590597s Jan 27 19:47:53.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066980753s Jan 27 19:47:55.035: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.041: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6": Phase="Running", Reason="", readiness=true. Elapsed: 22.06814731s Jan 27 19:47:55.041: INFO: The phase of Pod statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6 is Running (Ready = true) Jan 27 19:47:55.041: INFO: Pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6" satisfied condition "running and ready" Jan 27 19:47:55.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.068646628s Jan 27 19:47:55.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.043: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.069542359s Jan 27 19:47:55.043: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.063575466s Jan 27 19:47:55.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.065345594s Jan 27 19:47:55.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.064402579s Jan 27 19:47:55.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065926931s Jan 27 19:47:55.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.066685732s Jan 27 19:47:55.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:55.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065465964s Jan 27 19:47:55.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.066066693s Jan 27 19:47:57.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.040: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.066545789s Jan 27 19:47:57.040: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.040: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.067041163s Jan 27 19:47:57.040: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.064376722s Jan 27 19:47:57.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.066: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065011906s Jan 27 19:47:57.066: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.066: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.065265635s Jan 27 19:47:57.066: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065441001s Jan 27 19:47:57.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.068: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.066472911s Jan 27 19:47:57.068: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:57.068: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065706533s Jan 27 19:47:57.068: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.065807416s Jan 27 19:47:59.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.068042671s Jan 27 19:47:59.041: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.041: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.068211529s Jan 27 19:47:59.041: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.062: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.063485323s Jan 27 19:47:59.062: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.065: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.063755168s Jan 27 19:47:59.065: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.066: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.064626467s Jan 27 19:47:59.066: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.066: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.063619426s Jan 27 19:47:59.066: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.066: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.065203288s Jan 27 19:47:59.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:47:59.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.06452039s Jan 27 19:47:59.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.034: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06585813s Jan 27 19:48:01.034: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.042: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4": Phase="Running", Reason="", readiness=true. Elapsed: 28.069077905s Jan 27 19:48:01.042: INFO: The phase of Pod statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4 is Running (Ready = true) Jan 27 19:48:01.042: INFO: Pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4" satisfied condition "running and ready" Jan 27 19:48:01.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.068978172s Jan 27 19:48:01.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.064452274s Jan 27 19:48:01.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.070: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.067606059s Jan 27 19:48:01.070: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.070: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.068993786s Jan 27 19:48:01.070: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.071: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.070000232s Jan 27 19:48:01.071: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.072: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.070380148s Jan 27 19:48:01.072: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:01.072: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.069506732s Jan 27 19:48:01.072: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Running", Reason="", readiness=true. Elapsed: 30.067030966s Jan 27 19:48:03.035: INFO: The phase of Pod statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5 is Running (Ready = true) Jan 27 19:48:03.035: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5" satisfied condition "running and ready" Jan 27 19:48:03.041: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.067980654s Jan 27 19:48:03.041: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.063825765s Jan 27 19:48:03.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.066: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.064623953s Jan 27 19:48:03.066: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.066: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.065172357s Jan 27 19:48:03.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.069: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.068163522s Jan 27 19:48:03.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.070: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.067508307s Jan 27 19:48:03.070: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.071: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.068732765s Jan 27 19:48:03.071: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.069109756s Jan 27 19:48:05.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.063968541s Jan 27 19:48:05.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.066: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064483751s Jan 27 19:48:05.066: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065592889s Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064629847s Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.066017175s Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065606389s Jan 27 19:48:05.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Running", Reason="", readiness=true. Elapsed: 34.069688293s Jan 27 19:48:07.043: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Running (Ready = true) Jan 27 19:48:07.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9" satisfied condition "running and ready" Jan 27 19:48:07.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.063947844s Jan 27 19:48:07.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.064909713s Jan 27 19:48:07.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.06529987s Jan 27 19:48:07.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Running", Reason="", readiness=true. Elapsed: 34.067071257s Jan 27 19:48:07.068: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Running (Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7" satisfied condition "running and ready" Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067265378s Jan 27 19:48:07.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.069: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067912856s Jan 27 19:48:07.069: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064151905s Jan 27 19:48:09.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Running", Reason="", readiness=true. Elapsed: 36.064517918s Jan 27 19:48:09.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Running (Ready = true) Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3" satisfied condition "running and ready" Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064283508s Jan 27 19:48:09.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Running", Reason="", readiness=true. Elapsed: 36.06646406s Jan 27 19:48:09.068: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Running (Ready = true) Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8" satisfied condition "running and ready" Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.0666714s Jan 27 19:48:09.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.063778227s Jan 27 19:48:11.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.065: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.063407021s Jan 27 19:48:11.065: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.066: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Running", Reason="", readiness=true. Elapsed: 38.063424176s Jan 27 19:48:11.066: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Running (Ready = true) Jan 27 19:48:11.066: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2" satisfied condition "running and ready" Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.06785965s Jan 27 19:48:13.067: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Running", Reason="", readiness=true. Elapsed: 40.066006328s Jan 27 19:48:13.067: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Running (Ready = true) Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0" satisfied condition "running and ready" Jan 27 19:48:15.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Running", Reason="", readiness=true. Elapsed: 42.064429096s Jan 27 19:48:15.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Running (Ready = true) Jan 27 19:48:15.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1" satisfied condition "running and ready" STEP: Waiting up to 3 minutes for pods to be running 01/27/23 19:48:15.095 Jan 27 19:48:15.095: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-8297' to be running and ready Jan 27 19:48:15.194: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-8297' are running and ready (0 seconds elapsed) Jan 27 19:48:15.194: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-8297', 0 are Running and Ready. STEP: Getting kubelet stats 5 times and checking average duration 01/27/23 19:48:15.194 Jan 27 19:48:40.909: INFO: Getting kubelet stats for node capz-conf-7xz7d took an average of 141 milliseconds over 5 iterations [AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:187 Jan 27 19:48:40.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-stats-test-windows-serial-8297" for this suite.
01/27/23 19:48:40.945 {"msg":"PASSED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","completed":9,"skipped":430,"failed":0} ------------------------------ • [68.310 seconds] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/windows/framework.go:27 Kubelet stats collection for Windows nodes test/e2e/windows/kubelet_stats.go:43 when running 10 pods test/e2e/windows/kubelet_stats.go:45 should return within 10 seconds test/e2e/windows/kubelet_stats.go:47 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 19:47:32.67 Jan 27 19:47:32.671: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-stats-test-windows-serial 01/27/23 19:47:32.672 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:47:32.77 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:47:32.832 [It] should return within 10 seconds test/e2e/windows/kubelet_stats.go:47 STEP: Selecting a Windows node 01/27/23 19:47:32.893 Jan 27 19:47:32.927: INFO: Using node: capz-conf-7xz7d STEP: Scheduling 10 pods 01/27/23 19:47:32.927 Jan 27 19:47:32.968: INFO: Waiting up to 5m0s for pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO: Waiting up to 5m0s for pod "statscollectiontest-820a1f50-3712-4c21-8649-0e82506d08e9-4" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO: Waiting up to 5m0s for pod "statscollectiontest-6edb0ec3-e627-44fc-815c-cf82d38a5aed-6" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.973: INFO: Waiting up to 5m0s for pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:32.999: INFO: Waiting up to 5m0s for pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Waiting up to 5m0s for pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7" in namespace "kubelet-stats-test-windows-serial-8297" to be "running and ready" Jan 27 19:47:33.001: INFO: Pod "statscollectiontest-2f4878a5-4f83-468c-b592-29e3e44edb61-5": Phase="Pending", Reason="", readiness=false.
Elapsed: 30.063825765s Jan 27 19:48:03.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.066: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.064623953s Jan 27 19:48:03.066: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.066: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.065172357s Jan 27 19:48:03.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.069: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.068163522s Jan 27 19:48:03.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.070: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.067508307s Jan 27 19:48:03.070: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:03.071: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.068732765s Jan 27 19:48:03.071: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.042: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.069109756s Jan 27 19:48:05.042: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.063968541s Jan 27 19:48:05.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.066: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064483751s Jan 27 19:48:05.066: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065592889s Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064629847s Jan 27 19:48:05.067: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.066017175s Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:05.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065606389s Jan 27 19:48:05.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9": Phase="Running", Reason="", readiness=true. Elapsed: 34.069688293s Jan 27 19:48:07.043: INFO: The phase of Pod statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9 is Running (Ready = true) Jan 27 19:48:07.043: INFO: Pod "statscollectiontest-399acbe3-8323-40a4-8356-b291eb78f788-9" satisfied condition "running and ready" Jan 27 19:48:07.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.063947844s Jan 27 19:48:07.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.064909713s Jan 27 19:48:07.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Pending", Reason="", readiness=false. Elapsed: 34.06529987s Jan 27 19:48:07.068: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7": Phase="Running", Reason="", readiness=true. Elapsed: 34.067071257s Jan 27 19:48:07.068: INFO: The phase of Pod statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7 is Running (Ready = true) Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-5f46613a-ed48-46a9-ac68-24cdacbe2b7e-7" satisfied condition "running and ready" Jan 27 19:48:07.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067265378s Jan 27 19:48:07.069: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:07.069: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.067912856s Jan 27 19:48:07.069: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.064151905s Jan 27 19:48:09.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3": Phase="Running", Reason="", readiness=true. Elapsed: 36.064517918s Jan 27 19:48:09.067: INFO: The phase of Pod statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3 is Running (Ready = true) Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-a7eb40fe-d176-4a87-87b2-d8fe5ebc0607-3" satisfied condition "running and ready" Jan 27 19:48:09.067: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064283508s Jan 27 19:48:09.067: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8": Phase="Running", Reason="", readiness=true. Elapsed: 36.06646406s Jan 27 19:48:09.068: INFO: The phase of Pod statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8 is Running (Ready = true) Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-e3f8476e-4f5d-4d4a-afa0-92f9bec5b025-8" satisfied condition "running and ready" Jan 27 19:48:09.068: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.0666714s Jan 27 19:48:09.068: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.063778227s Jan 27 19:48:11.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.065: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.063407021s Jan 27 19:48:11.065: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:11.066: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2": Phase="Running", Reason="", readiness=true. Elapsed: 38.063424176s Jan 27 19:48:11.066: INFO: The phase of Pod statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2 is Running (Ready = true) Jan 27 19:48:11.066: INFO: Pod "statscollectiontest-b183513b-0fbf-4aa6-8699-533a0680f8e6-2" satisfied condition "running and ready" Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.06785965s Jan 27 19:48:13.067: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Pending, waiting for it to be Running (with Ready = true) Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 40.066006328s Jan 27 19:48:13.067: INFO: The phase of Pod statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0 is Running (Ready = true)
Jan 27 19:48:13.067: INFO: Pod "statscollectiontest-851014c5-3f88-48e7-bc35-bf87fe80a3ec-0" satisfied condition "running and ready"
Jan 27 19:48:15.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1": Phase="Running", Reason="", readiness=true. Elapsed: 42.064429096s
Jan 27 19:48:15.063: INFO: The phase of Pod statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1 is Running (Ready = true)
Jan 27 19:48:15.063: INFO: Pod "statscollectiontest-cf46ca36-f8e3-4749-a0e8-64ecd520c7ae-1" satisfied condition "running and ready"
STEP: Waiting up to 3 minutes for pods to be running 01/27/23 19:48:15.095
Jan 27 19:48:15.095: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-8297' to be running and ready
Jan 27 19:48:15.194: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-8297' are running and ready (0 seconds elapsed)
Jan 27 19:48:15.194: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-8297', 0 are Running and Ready.
STEP: Getting kubelet stats 5 times and checking average duration 01/27/23 19:48:15.194
Jan 27 19:48:40.909: INFO: Getting kubelet stats for node capz-conf-7xz7d took an average of 141 milliseconds over 5 iterations
[AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:187
Jan 27 19:48:40.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-stats-test-windows-serial-8297" for this suite. 01/27/23 19:48:40.945
<< End Captured GinkgoWriter Output
------------------------------
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:48:40.989
Jan 27 19:48:40.989: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/27/23 19:48:40.99
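The repeated "Pending, waiting for it to be Running (with Ready = true)" lines above come from the e2e framework polling each statscollectiontest pod until it is both in the Running phase and reporting the Ready condition. A minimal client-go sketch of that polling pattern, assuming a hypothetical helper name, namespace, pod name and timeout (none of these are taken from the test code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunningAndReady polls a pod every 2s until it is Running and Ready,
// mirroring the "waiting for it to be Running (with Ready = true)" log lines above.
func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("Pod %q is %s, waiting for it to be Running\n", name, pod.Status.Phase)
			return false, nil
		}
		// Running is not enough; the Ready condition must also be true.
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig path matches the log's ">>> kubeConfig: /tmp/kubeconfig"; namespace,
	// pod name and timeout below are placeholders for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodRunningAndReady(cs, "default", "example-pod", 5*time.Minute); err != nil {
		panic(err)
	}
}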
�[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:48:41.091�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:48:41.152�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 27 19:48:41.317: INFO: Waiting up to 1m0s for all nodes to be ready Jan 27 19:49:41.572: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322 �[1mSTEP:�[0m Trying to get 2 available nodes which can run pod �[38;5;243m01/27/23 19:49:41.603�[0m �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/27/23 19:49:41.603�[0m Jan 27 19:49:41.642: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-1928" to be "running" Jan 27 19:49:41.673: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 31.676687ms Jan 27 19:49:43.706: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064327539s Jan 27 19:49:45.706: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.064366259s Jan 27 19:49:45.706: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m01/27/23 19:49:45.738�[0m �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/27/23 19:49:45.778�[0m Jan 27 19:49:45.815: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-1928" to be "running" Jan 27 19:49:45.847: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 31.408047ms Jan 27 19:49:47.880: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064795821s Jan 27 19:49:49.879: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063475416s Jan 27 19:49:51.880: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.065032323s Jan 27 19:49:51.880: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m01/27/23 19:49:51.912�[0m �[1mSTEP:�[0m Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. �[38;5;243m01/27/23 19:49:51.955�[0m �[1mSTEP:�[0m Apply 10 fake resource to node capz-conf-7xz7d. �[38;5;243m01/27/23 19:49:52.033�[0m �[1mSTEP:�[0m Apply 10 fake resource to node capz-conf-d9r4r. �[38;5;243m01/27/23 19:49:52.15�[0m [It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355 �[1mSTEP:�[0m Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. �[38;5;243m01/27/23 19:49:52.193�[0m Jan 27 19:49:52.229: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-1928" to be "running" Jan 27 19:49:52.260: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 31.66106ms Jan 27 19:49:54.293: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064659074s Jan 27 19:49:56.293: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063820179s Jan 27 19:49:58.294: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065428561s Jan 27 19:50:00.294: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. 
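The "high", "low-N" and "medium" pods above differ mainly in the PriorityClass they reference, which is what allows the scheduler to preempt the low-priority pods later in the test. A rough sketch of defining such classes with the scheduling/v1 types; the names and values here are illustrative, not the suite's actual ones:

package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePriorityClass sketches a PriorityClass like the ones a preemption test
// registers for its high/medium/low pods; name and value are placeholders.
func examplePriorityClass(name string, value int32) *schedulingv1.PriorityClass {
	return &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: name},
		Value:         value,
		GlobalDefault: false,
		Description:   "example priority class for preemption tests",
	}
}

func main() {
	for _, pc := range []*schedulingv1.PriorityClass{
		examplePriorityClass("low-priority", 100),
		examplePriorityClass("medium-priority", 500),
		examplePriorityClass("high-priority", 1000),
	} {
		fmt.Printf("%s: %d\n", pc.Name, pc.Value)
	}
}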
Elapsed: 8.065037097s Jan 27 19:50:02.294: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 10.065184422s Jan 27 19:50:02.294: INFO: Pod "high" satisfied condition "running" Jan 27 19:50:02.369: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-1928" to be "running" Jan 27 19:50:02.401: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.64133ms Jan 27 19:50:04.434: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064550616s Jan 27 19:50:06.434: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.064537009s Jan 27 19:50:06.434: INFO: Pod "low-1" satisfied condition "running" Jan 27 19:50:06.504: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-1928" to be "running" Jan 27 19:50:06.537: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.001355ms Jan 27 19:50:08.569: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064192098s Jan 27 19:50:10.570: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 4.065200882s Jan 27 19:50:10.570: INFO: Pod "low-2" satisfied condition "running" Jan 27 19:50:10.638: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-1928" to be "running" Jan 27 19:50:10.675: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.546935ms Jan 27 19:50:12.707: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069028899s Jan 27 19:50:14.708: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069825928s Jan 27 19:50:16.707: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 6.069053314s Jan 27 19:50:16.707: INFO: Pod "low-3" satisfied condition "running" �[1mSTEP:�[0m Create 1 Medium Pod with TopologySpreadConstraints �[38;5;243m01/27/23 19:50:16.739�[0m Jan 27 19:50:16.781: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-1928" to be "running" Jan 27 19:50:16.816: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 34.736468ms Jan 27 19:50:18.850: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068736134s Jan 27 19:50:20.849: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067873431s Jan 27 19:50:22.848: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067109668s Jan 27 19:50:24.849: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067207589s Jan 27 19:50:26.849: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 10.067440306s Jan 27 19:50:26.849: INFO: Pod "medium" satisfied condition "running" �[1mSTEP:�[0m Verify there are 3 Pods left in this namespace �[38;5;243m01/27/23 19:50:26.882�[0m �[1mSTEP:�[0m Pod "high" is as expected to be running. �[38;5;243m01/27/23 19:50:26.918�[0m �[1mSTEP:�[0m Pod "low-1" is as expected to be running. �[38;5;243m01/27/23 19:50:26.918�[0m �[1mSTEP:�[0m Pod "medium" is as expected to be running. 
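The "medium" pod is the one created with TopologySpreadConstraints over the dedicated kubernetes.io/e2e-pts-preemption topology key that was applied to the two nodes. A sketch of what such a pod spec can look like, assuming placeholder labels, image, PriorityClass name and fake extended-resource name (only the topology key is taken from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mediumPod builds a pod that must be spread evenly across the nodes labeled
// with the dedicated kubernetes.io/e2e-pts-preemption topology key.
func mediumPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"app": "pts-preemption"}, // placeholder label
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed PriorityClass name
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-preemption"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9", // placeholder image
				Resources: corev1.ResourceRequirements{
					// The test occupies a fake extended resource on each node;
					// the resource name here is illustrative only.
					Requests: corev1.ResourceList{
						"example.com/fakecpu": resource.MustParse("1"),
					},
					Limits: corev1.ResourceList{
						"example.com/fakecpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
}

func main() {
	pod := mediumPod()
	fmt.Println(pod.Name, pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}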
01/27/23 19:50:26.918 [AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-7xz7d 01/27/23 19:50:26.919
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 01/27/23 19:50:26.991
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-d9r4r 01/27/23 19:50:27.023
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 01/27/23 19:50:27.096
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Jan 27 19:50:27.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1928" for this suite. 01/27/23 19:50:27.241
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","completed":10,"skipped":503,"failed":0}
------------------------------
• [106.476 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption test/e2e/scheduling/preemption.go:316 validates proper pods are preempted test/e2e/scheduling/preemption.go:355
------------------------------
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:50:27.472
Jan 27 19:50:27.472: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 19:50:27.473
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:50:27.58
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:50:27.642
[It] should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96
STEP: setting up resource consumer and HPA 01/27/23 19:50:27.703
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas 01/27/23 19:50:27.703
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-874 01/27/23 19:50:27.751
I0127 19:50:27.786577 13 runners.go:193]
Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-874, replica count: 2 I0127 19:50:37.839331 13 runners.go:193] consumer Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 19:50:37.839�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-874 �[38;5;243m01/27/23 19:50:37.888�[0m I0127 19:50:37.924088 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-874, replica count: 1 I0127 19:50:47.974908 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 19:50:52.975: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 27 19:50:53.007: INFO: RC consumer: consume 220 millicores in total Jan 27 19:50:53.007: INFO: RC consumer: setting consumption to 220 millicores in total Jan 27 19:50:53.007: INFO: RC consumer: sending request to consume 220 millicores Jan 27 19:50:53.007: INFO: RC consumer: consume 0 MB in total Jan 27 19:50:53.007: INFO: RC consumer: consume custom metric 0 in total Jan 27 19:50:53.007: INFO: RC consumer: disabling consumption of custom metric QPS Jan 27 19:50:53.008: INFO: RC consumer: disabling mem consumption Jan 27 19:50:53.007: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } �[1mSTEP:�[0m triggering scale down to record a recommendation �[38;5;243m01/27/23 19:50:53.046�[0m Jan 27 19:50:53.046: INFO: RC consumer: consume 110 millicores in total Jan 27 19:50:53.067: INFO: RC consumer: setting consumption to 110 millicores in total Jan 27 19:50:53.103: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:13.137: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:23.068: INFO: RC consumer: sending request to consume 110 millicores Jan 27 19:51:23.068: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 19:51:33.138: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:53.108: INFO: RC consumer: sending request to consume 110 millicores Jan 27 19:51:53.108: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 19:51:53.137: INFO: waiting for 1 replicas (current: 2) Jan 27 19:52:13.140: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m01/27/23 19:52:13.14�[0m Jan 27 19:52:13.140: INFO: RC consumer: consume 330 millicores in total Jan 27 19:52:13.140: INFO: RC consumer: setting consumption to 330 millicores in total Jan 27 19:52:13.172: INFO: waiting for 3 replicas (current: 1) Jan 27 19:52:23.161: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:52:23.161: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:52:33.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:52:53.202: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:52:53.202: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:52:53.204: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:23.244: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:53:23.244: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:53:33.208: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:53.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:53.286: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:53:53.286: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:54:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:23.329: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:54:23.329: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:54:33.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:53.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:53.369: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:54:53.369: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:55:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:55:23.410: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:55:23.410: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:55:33.207: INFO: waiting for 3 replicas (current: 2) Jan 27 19:55:53.204: INFO: waiting for 3 replicas (current: 3) �[1mSTEP:�[0m verifying time waited for a scale up �[38;5;243m01/27/23 19:55:53.205�[0m Jan 27 19:55:53.205: INFO: time waited for scale up: 3m40.064739151s �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m01/27/23 19:55:53.24�[0m Jan 27 19:55:53.240: INFO: RC consumer: stopping metric consumer Jan 27 19:55:53.240: INFO: RC consumer: stopping CPU consumer Jan 27 19:55:53.240: INFO: RC consumer: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-874, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 
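The "time waited for scale up: 3m40.064739151s" result above is the point of the long upscale stabilization window: the HPA keeps recording a scale-up recommendation but only acts on it once the configured window has elapsed. A minimal autoscaling/v2 sketch of an HPA with such a behavior block; the window length, replica bounds and utilization figure are assumptions for illustration, with only the "consumer" Deployment name taken from the log:

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// hpaWithUpscaleStabilization sketches an HPA whose scale-up decisions are held
// back by a stabilization window, similar to the behavior exercised above.
func hpaWithUpscaleStabilization() *autoscalingv2.HorizontalPodAutoscaler {
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer"}, // HPA object name is illustrative
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "consumer", // Deployment name as seen in the log
			},
			MinReplicas: int32Ptr(1),
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: "cpu",
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: int32Ptr(20), // illustrative target
					},
				},
			}},
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{
					// Hold new scale-up recommendations for 3 minutes before acting.
					StabilizationWindowSeconds: int32Ptr(180),
				},
			},
		},
	}
}

func main() {
	hpa := hpaWithUpscaleStabilization()
	fmt.Println(*hpa.Spec.Behavior.ScaleUp.StabilizationWindowSeconds)
}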
19:56:03.244�[0m Jan 27 19:56:03.361: INFO: Deleting Deployment.apps consumer took: 34.802013ms Jan 27 19:56:03.462: INFO: Terminating Deployment.apps consumer pods took: 100.303994ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-874, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:56:05.612�[0m Jan 27 19:56:05.735: INFO: Deleting ReplicationController consumer-ctrl took: 39.813091ms Jan 27 19:56:05.836: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.419305ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 Jan 27 19:56:07.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-874" for this suite. �[38;5;243m01/27/23 19:56:07.435�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period","completed":11,"skipped":626,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [340.001 seconds]�[0m [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m with long upscale stabilization window �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:95�[0m should scale up only after the stabilization period �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:50:27.472�[0m Jan 27 19:50:27.472: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/27/23 19:50:27.473�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:50:27.58�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:50:27.642�[0m [It] should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m01/27/23 19:50:27.703�[0m �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas �[38;5;243m01/27/23 19:50:27.703�[0m �[1mSTEP:�[0m creating deployment consumer in namespace horizontal-pod-autoscaling-874 �[38;5;243m01/27/23 19:50:27.751�[0m I0127 19:50:27.786577 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-874, replica count: 2 I0127 19:50:37.839331 13 runners.go:193] consumer Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 19:50:37.839�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-874 �[38;5;243m01/27/23 19:50:37.888�[0m I0127 19:50:37.924088 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-874, replica count: 1 I0127 19:50:47.974908 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 
created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 19:50:52.975: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 27 19:50:53.007: INFO: RC consumer: consume 220 millicores in total Jan 27 19:50:53.007: INFO: RC consumer: setting consumption to 220 millicores in total Jan 27 19:50:53.007: INFO: RC consumer: sending request to consume 220 millicores Jan 27 19:50:53.007: INFO: RC consumer: consume 0 MB in total Jan 27 19:50:53.007: INFO: RC consumer: consume custom metric 0 in total Jan 27 19:50:53.007: INFO: RC consumer: disabling consumption of custom metric QPS Jan 27 19:50:53.008: INFO: RC consumer: disabling mem consumption Jan 27 19:50:53.007: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } �[1mSTEP:�[0m triggering scale down to record a recommendation �[38;5;243m01/27/23 19:50:53.046�[0m Jan 27 19:50:53.046: INFO: RC consumer: consume 110 millicores in total Jan 27 19:50:53.067: INFO: RC consumer: setting consumption to 110 millicores in total Jan 27 19:50:53.103: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:13.137: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:23.068: INFO: RC consumer: sending request to consume 110 millicores Jan 27 19:51:23.068: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 19:51:33.138: INFO: waiting for 1 replicas (current: 2) Jan 27 19:51:53.108: INFO: RC consumer: sending request to consume 110 millicores Jan 27 19:51:53.108: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 19:51:53.137: INFO: waiting for 1 replicas (current: 2) Jan 27 19:52:13.140: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m01/27/23 19:52:13.14�[0m Jan 27 19:52:13.140: INFO: RC consumer: consume 330 millicores in total Jan 27 19:52:13.140: INFO: RC consumer: setting consumption to 330 millicores in total Jan 27 19:52:13.172: INFO: waiting for 3 replicas (current: 1) Jan 27 19:52:23.161: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:52:23.161: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:52:33.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:52:53.202: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:52:53.202: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:52:53.204: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:23.244: INFO: RC consumer: sending request to consume 330 millicores Jan 27 
19:53:23.244: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:53:33.208: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:53.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:53:53.286: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:53:53.286: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:54:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:23.329: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:54:23.329: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:54:33.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:53.205: INFO: waiting for 3 replicas (current: 1) Jan 27 19:54:53.369: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:54:53.369: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:55:13.206: INFO: waiting for 3 replicas (current: 1) Jan 27 19:55:23.410: INFO: RC consumer: sending request to consume 330 millicores Jan 27 19:55:23.410: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-874/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 27 19:55:33.207: INFO: waiting for 3 replicas (current: 2) Jan 27 19:55:53.204: INFO: waiting for 3 replicas (current: 3) �[1mSTEP:�[0m verifying time waited for a scale up �[38;5;243m01/27/23 19:55:53.205�[0m Jan 27 19:55:53.205: INFO: time waited for scale up: 3m40.064739151s �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m01/27/23 19:55:53.24�[0m Jan 27 19:55:53.240: INFO: RC consumer: stopping metric consumer Jan 27 19:55:53.240: INFO: RC consumer: stopping CPU consumer Jan 27 19:55:53.240: INFO: RC consumer: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-874, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:56:03.244�[0m Jan 27 19:56:03.361: INFO: Deleting Deployment.apps consumer took: 34.802013ms Jan 27 19:56:03.462: INFO: Terminating Deployment.apps consumer pods took: 100.303994ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-874, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:56:05.612�[0m Jan 27 19:56:05.735: INFO: Deleting ReplicationController consumer-ctrl took: 39.813091ms Jan 27 19:56:05.836: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.419305ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 Jan 27 19:56:07.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "horizontal-pod-autoscaling-874" for this suite. 01/27/23 19:56:07.435
<< End Captured GinkgoWriter Output
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:56:07.48
Jan 27 19:56:07.480: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 19:56:07.481
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:56:07.579
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:56:07.641
[It] Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 01/27/23 19:56:07.703
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-4546 01/27/23 19:56:07.75
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-4546 01/27/23 19:56:07.751
I0127 19:56:07.785864
13 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-4546, replica count: 1 I0127 19:56:17.837038 13 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 19:56:17.837�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-4546 �[38;5;243m01/27/23 19:56:17.883�[0m I0127 19:56:17.920902 13 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-4546, replica count: 1 I0127 19:56:27.971883 13 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 19:56:32.972: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 27 19:56:33.005: INFO: RC rs: consume 125 millicores in total Jan 27 19:56:33.005: INFO: RC rs: setting consumption to 125 millicores in total Jan 27 19:56:33.005: INFO: RC rs: sending request to consume 125 millicores Jan 27 19:56:33.005: INFO: RC rs: consume 0 MB in total Jan 27 19:56:33.005: INFO: RC rs: consume custom metric 0 in total Jan 27 19:56:33.005: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 } Jan 27 19:56:33.005: INFO: RC rs: disabling mem consumption Jan 27 19:56:33.005: INFO: RC rs: disabling consumption of custom metric QPS Jan 27 19:56:33.078: INFO: waiting for 3 replicas (current: 1) Jan 27 19:56:53.113: INFO: waiting for 3 replicas (current: 1) Jan 27 19:57:03.077: INFO: RC rs: sending request to consume 125 millicores Jan 27 19:57:03.077: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 } Jan 27 19:57:13.111: INFO: waiting for 3 replicas (current: 3) Jan 27 19:57:13.111: INFO: RC rs: consume 500 millicores in total Jan 27 19:57:13.111: INFO: RC rs: setting consumption to 500 millicores in total Jan 27 19:57:13.143: INFO: waiting for 5 replicas (current: 3) Jan 27 19:57:33.127: INFO: RC rs: sending request to consume 500 millicores Jan 27 19:57:33.127: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 } Jan 27 19:57:33.175: INFO: waiting for 5 replicas (current: 3) Jan 27 19:57:53.174: INFO: waiting for 5 replicas (current: 3) Jan 27 19:58:03.168: INFO: RC rs: sending request to consume 500 millicores Jan 27 19:58:03.168: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 } Jan 27 19:58:13.175: INFO: waiting for 5 replicas (current: 5) �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m01/27/23 19:58:13.211�[0m Jan 27 19:58:13.212: INFO: RC rs: stopping metric consumer Jan 27 19:58:13.212: INFO: RC rs: stopping CPU consumer Jan 27 19:58:13.212: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace 
horizontal-pod-autoscaling-4546, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:58:23.215�[0m Jan 27 19:58:23.333: INFO: Deleting ReplicaSet.apps rs took: 35.066602ms Jan 27 19:58:23.434: INFO: Terminating ReplicaSet.apps rs pods took: 100.990268ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-4546, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:58:26.097�[0m Jan 27 19:58:26.214: INFO: Deleting ReplicationController rs-ctrl took: 34.52699ms Jan 27 19:58:26.315: INFO: Terminating ReplicationController rs-ctrl pods took: 100.661175ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 19:58:27.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-4546" for this suite. �[38;5;243m01/27/23 19:58:28.008�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","completed":12,"skipped":744,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [140.563 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:96�[0m Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:98�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:56:07.48�[0m Jan 27 19:56:07.480: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/27/23 19:56:07.481�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:56:07.579�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:56:07.641�[0m [It] Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container test/e2e/autoscaling/horizontal_pod_autoscaling.go:98 �[1mSTEP:�[0m Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas �[38;5;243m01/27/23 19:56:07.703�[0m �[1mSTEP:�[0m creating replicaset rs in namespace horizontal-pod-autoscaling-4546 �[38;5;243m01/27/23 19:56:07.75�[0m �[1mSTEP:�[0m creating replicaset rs in namespace horizontal-pod-autoscaling-4546 �[38;5;243m01/27/23 19:56:07.751�[0m I0127 19:56:07.785864 13 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-4546, replica count: 1 I0127 19:56:17.837038 13 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 19:56:17.837�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-4546 �[38;5;243m01/27/23 19:56:17.883�[0m I0127 
19:56:17.920902 13 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-4546, replica count: 1 I0127 19:56:27.971883 13 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 19:56:32.972: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 27 19:56:33.005: INFO: RC rs: consume 125 millicores in total Jan 27 19:56:33.005: INFO: RC rs: setting consumption to 125 millicores in total Jan 27 19:56:33.005: INFO: RC rs: sending request to consume 125 millicores Jan 27 19:56:33.005: INFO: RC rs: consume 0 MB in total Jan 27 19:56:33.005: INFO: RC rs: consume custom metric 0 in total Jan 27 19:56:33.005: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 } Jan 27 19:56:33.005: INFO: RC rs: disabling mem consumption Jan 27 19:56:33.005: INFO: RC rs: disabling consumption of custom metric QPS Jan 27 19:56:33.078: INFO: waiting for 3 replicas (current: 1) Jan 27 19:56:53.113: INFO: waiting for 3 replicas (current: 1) Jan 27 19:57:03.077: INFO: RC rs: sending request to consume 125 millicores Jan 27 19:57:03.077: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 } Jan 27 19:57:13.111: INFO: waiting for 3 replicas (current: 3) Jan 27 19:57:13.111: INFO: RC rs: consume 500 millicores in total Jan 27 19:57:13.111: INFO: RC rs: setting consumption to 500 millicores in total Jan 27 19:57:13.143: INFO: waiting for 5 replicas (current: 3) Jan 27 19:57:33.127: INFO: RC rs: sending request to consume 500 millicores Jan 27 19:57:33.127: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 } Jan 27 19:57:33.175: INFO: waiting for 5 replicas (current: 3) Jan 27 19:57:53.174: INFO: waiting for 5 replicas (current: 3) Jan 27 19:58:03.168: INFO: RC rs: sending request to consume 500 millicores Jan 27 19:58:03.168: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4546/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 } Jan 27 19:58:13.175: INFO: waiting for 5 replicas (current: 5) �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m01/27/23 19:58:13.211�[0m Jan 27 19:58:13.212: INFO: RC rs: stopping metric consumer Jan 27 19:58:13.212: INFO: RC rs: stopping CPU consumer Jan 27 19:58:13.212: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-4546, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:58:23.215�[0m Jan 27 19:58:23.333: INFO: Deleting ReplicaSet.apps rs took: 35.066602ms Jan 27 19:58:23.434: INFO: Terminating ReplicaSet.apps rs pods took: 100.990268ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-4546, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 19:58:26.097�[0m Jan 27 19:58:26.214: INFO: 
Deleting ReplicationController rs-ctrl took: 34.52699ms Jan 27 19:58:26.315: INFO: Terminating ReplicationController rs-ctrl pods took: 100.661175ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 19:58:27.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-4546" for this suite. �[38;5;243m01/27/23 19:58:28.008�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;1
4mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-scheduling] SchedulerPredicates [Serial]�[0m �[1mvalidates that NodeSelector is respected if matching [Conformance]�[0m �[38;5;243mtest/e2e/scheduling/predicates.go:461�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:58:28.058�[0m Jan 27 19:58:28.058: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-pred �[38;5;243m01/27/23 19:58:28.059�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:58:28.157�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:58:28.218�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 27 19:58:28.280: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 27 19:58:28.349: INFO: Waiting for terminating namespaces to be deleted... Jan 27 19:58:28.382: INFO: Logging pods the apiserver thinks is on node capz-conf-7xz7d before test Jan 27 19:58:28.421: INFO: calico-node-windows-dqk58 from calico-system started at 2023-01-27 19:18:25 +0000 UTC (2 container statuses recorded) Jan 27 19:58:28.421: INFO: Container calico-node-felix ready: true, restart count 1 Jan 27 19:58:28.421: INFO: Container calico-node-startup ready: true, restart count 0 Jan 27 19:58:28.421: INFO: containerd-logger-p8hqb from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container containerd-logger ready: true, restart count 0 Jan 27 19:58:28.421: INFO: csi-azuredisk-node-win-vgkvl from kube-system started at 2023-01-27 19:18:55 +0000 UTC (3 container statuses recorded) Jan 27 19:58:28.421: INFO: Container azuredisk ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 19:58:28.421: INFO: csi-proxy-bkbqk from kube-system started at 2023-01-27 19:18:55 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 19:58:28.421: INFO: kube-proxy-windows-t6bzr from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Logging pods the apiserver thinks is on node capz-conf-d9r4r before test Jan 27 19:58:28.464: INFO: calico-node-windows-v8qkl from calico-system started at 2023-01-27 19:18:17 +0000 UTC (2 container statuses recorded) Jan 27 19:58:28.464: INFO: Container calico-node-felix ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container calico-node-startup 
ready: true, restart count 0 Jan 27 19:58:28.464: INFO: containerd-logger-44hf4 from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container containerd-logger ready: true, restart count 0 Jan 27 19:58:28.464: INFO: csi-azuredisk-node-win-7gwtl from kube-system started at 2023-01-27 19:18:47 +0000 UTC (3 container statuses recorded) Jan 27 19:58:28.464: INFO: Container azuredisk ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 19:58:28.464: INFO: csi-proxy-4r9lq from kube-system started at 2023-01-27 19:18:47 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 19:58:28.464: INFO: kube-proxy-windows-685wt from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/27/23 19:58:28.464�[0m Jan 27 19:58:28.500: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1453" to be "running" Jan 27 19:58:28.532: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 31.656312ms Jan 27 19:58:30.564: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064034397s Jan 27 19:58:32.565: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.064511044s Jan 27 19:58:32.565: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m01/27/23 19:58:32.597�[0m �[1mSTEP:�[0m Trying to apply a random label on the found node. �[38;5;243m01/27/23 19:58:32.639�[0m �[1mSTEP:�[0m verifying the node has the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e 42 �[38;5;243m01/27/23 19:58:32.679�[0m �[1mSTEP:�[0m Trying to relaunch the pod, now with labels. �[38;5;243m01/27/23 19:58:32.714�[0m Jan 27 19:58:32.750: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-1453" to be "not pending" Jan 27 19:58:32.782: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 31.65748ms Jan 27 19:58:34.815: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064570592s Jan 27 19:58:36.815: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064861518s Jan 27 19:58:38.815: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 6.065249103s Jan 27 19:58:38.815: INFO: Pod "with-labels" satisfied condition "not pending" �[1mSTEP:�[0m removing the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e off the node capz-conf-d9r4r �[38;5;243m01/27/23 19:58:38.848�[0m �[1mSTEP:�[0m verifying the node doesn't have the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e �[38;5;243m01/27/23 19:58:38.924�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Jan 27 19:58:38.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-pred-1453" for this suite. 
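The relaunch step above works by matching the freshly applied node label through spec.nodeSelector. A minimal sketch of an equivalent pod object built with the upstream API types follows; the label key and value are copied from the log, while the pause image tag is an assumption rather than the exact image the e2e framework deploys.

```go
// Sketch: a pod constrained to the labelled node via spec.nodeSelector.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that should only schedule onto the node carrying the throwaway
	// e2e label applied earlier in this test (value "42").
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: "sched-pred-1453"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "registry.k8s.io/pause:3.8", // assumed image; the suite uses its own pause image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```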
�[38;5;243m01/27/23 19:58:38.992�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","completed":13,"skipped":986,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [10.973 seconds]�[0m [sig-scheduling] SchedulerPredicates [Serial] �[38;5;243mtest/e2e/scheduling/framework.go:40�[0m validates that NodeSelector is respected if matching [Conformance] �[38;5;243mtest/e2e/scheduling/predicates.go:461�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:58:28.058�[0m Jan 27 19:58:28.058: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-pred �[38;5;243m01/27/23 19:58:28.059�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:58:28.157�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:58:28.218�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 27 19:58:28.280: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 27 19:58:28.349: INFO: Waiting for terminating namespaces to be deleted... Jan 27 19:58:28.382: INFO: Logging pods the apiserver thinks is on node capz-conf-7xz7d before test Jan 27 19:58:28.421: INFO: calico-node-windows-dqk58 from calico-system started at 2023-01-27 19:18:25 +0000 UTC (2 container statuses recorded) Jan 27 19:58:28.421: INFO: Container calico-node-felix ready: true, restart count 1 Jan 27 19:58:28.421: INFO: Container calico-node-startup ready: true, restart count 0 Jan 27 19:58:28.421: INFO: containerd-logger-p8hqb from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container containerd-logger ready: true, restart count 0 Jan 27 19:58:28.421: INFO: csi-azuredisk-node-win-vgkvl from kube-system started at 2023-01-27 19:18:55 +0000 UTC (3 container statuses recorded) Jan 27 19:58:28.421: INFO: Container azuredisk ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 19:58:28.421: INFO: csi-proxy-bkbqk from kube-system started at 2023-01-27 19:18:55 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 19:58:28.421: INFO: kube-proxy-windows-t6bzr from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.421: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 19:58:28.421: INFO: Logging pods the apiserver thinks is on node capz-conf-d9r4r before test Jan 27 19:58:28.464: INFO: calico-node-windows-v8qkl from calico-system started at 2023-01-27 19:18:17 +0000 UTC (2 container statuses recorded) Jan 27 19:58:28.464: INFO: Container calico-node-felix ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container calico-node-startup ready: true, restart count 0 Jan 27 19:58:28.464: INFO: containerd-logger-44hf4 from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container containerd-logger ready: 
true, restart count 0 Jan 27 19:58:28.464: INFO: csi-azuredisk-node-win-7gwtl from kube-system started at 2023-01-27 19:18:47 +0000 UTC (3 container statuses recorded) Jan 27 19:58:28.464: INFO: Container azuredisk ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 19:58:28.464: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 19:58:28.464: INFO: csi-proxy-4r9lq from kube-system started at 2023-01-27 19:18:47 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 19:58:28.464: INFO: kube-proxy-windows-685wt from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 19:58:28.464: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/27/23 19:58:28.464�[0m Jan 27 19:58:28.500: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1453" to be "running" Jan 27 19:58:28.532: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 31.656312ms Jan 27 19:58:30.564: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064034397s Jan 27 19:58:32.565: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.064511044s Jan 27 19:58:32.565: INFO: Pod "without-label" satisfied condition "running" �[1mSTEP:�[0m Explicitly delete pod here to free the resource it takes. �[38;5;243m01/27/23 19:58:32.597�[0m �[1mSTEP:�[0m Trying to apply a random label on the found node. �[38;5;243m01/27/23 19:58:32.639�[0m �[1mSTEP:�[0m verifying the node has the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e 42 �[38;5;243m01/27/23 19:58:32.679�[0m �[1mSTEP:�[0m Trying to relaunch the pod, now with labels. �[38;5;243m01/27/23 19:58:32.714�[0m Jan 27 19:58:32.750: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-1453" to be "not pending" Jan 27 19:58:32.782: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 31.65748ms Jan 27 19:58:34.815: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064570592s Jan 27 19:58:36.815: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064861518s Jan 27 19:58:38.815: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 6.065249103s Jan 27 19:58:38.815: INFO: Pod "with-labels" satisfied condition "not pending" �[1mSTEP:�[0m removing the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e off the node capz-conf-d9r4r �[38;5;243m01/27/23 19:58:38.848�[0m �[1mSTEP:�[0m verifying the node doesn't have the label kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e �[38;5;243m01/27/23 19:58:38.924�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Jan 27 19:58:38.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-pred-1453" for this suite. 
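The random node label itself is applied, and later removed, by modifying the node object on capz-conf-d9r4r. A sketch of doing the same by hand with client-go and a strategic-merge patch follows; whether the e2e helpers patch or update the node is an implementation detail, so treat this as one possible reproduction (assuming KUBECONFIG points at the workload cluster).

```go
// Sketch: add the throwaway e2e label to a node with a strategic-merge patch.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite logs (>>> kubeConfig: /tmp/kubeconfig).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The test later removes the label again; here we only add it.
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-8ec010a3-5da1-4ced-9f92-db1b3ec6035e":"42"}}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "capz-conf-d9r4r",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labeled node capz-conf-d9r4r")
}
```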
01/27/23 19:58:38.992 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
<< End Captured GinkgoWriter Output
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
validates lower priority pod preemption by critical pod [Conformance]
test/e2e/scheduling/preemption.go:218
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 19:58:39.04
Jan 27 19:58:39.040: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/27/23 19:58:39.041
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 19:58:39.14
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 19:58:39.203
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 27 19:58:39.369: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 27 19:59:39.621: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218
STEP: Create pods that use 4/5 of node resources. 01/27/23 19:59:39.653
Jan 27 19:59:39.731: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 27 19:59:39.767: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Jan 27 19:59:39.847: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Jan 27 19:59:39.881: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
�[38;5;243m01/27/23 19:59:39.881�[0m Jan 27 19:59:39.881: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:39.918: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 36.898049ms Jan 27 19:59:41.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070795673s Jan 27 19:59:43.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07079101s Jan 27 19:59:45.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070204177s Jan 27 19:59:47.951: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069284049s Jan 27 19:59:49.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 10.070649632s Jan 27 19:59:49.952: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Jan 27 19:59:49.952: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:49.983: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.368484ms Jan 27 19:59:49.983: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:59:49.984: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:50.018: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 34.877154ms Jan 27 19:59:52.051: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067062283s Jan 27 19:59:54.054: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.07025081s Jan 27 19:59:54.054: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:59:54.054: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:54.086: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.76379ms Jan 27 19:59:54.086: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" �[1mSTEP:�[0m Run a critical pod that use same resources as that of a lower priority pod �[38;5;243m01/27/23 19:59:54.086�[0m Jan 27 19:59:54.125: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" Jan 27 19:59:54.156: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.418856ms Jan 27 19:59:56.190: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064488451s Jan 27 19:59:58.191: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066345139s Jan 27 20:00:00.189: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.06442421s Jan 27 20:00:00.190: INFO: Pod "critical-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Jan 27 20:00:00.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-preemption-7211" for this suite. �[38;5;243m01/27/23 20:00:00.456�[0m [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","completed":14,"skipped":1097,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [81.630 seconds]�[0m [sig-scheduling] SchedulerPreemption [Serial] �[38;5;243mtest/e2e/scheduling/framework.go:40�[0m validates lower priority pod preemption by critical pod [Conformance] �[38;5;243mtest/e2e/scheduling/preemption.go:218�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 19:58:39.04�[0m Jan 27 19:58:39.040: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m01/27/23 19:58:39.041�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 19:58:39.14�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 19:58:39.203�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 27 19:58:39.369: INFO: Waiting up to 1m0s for all nodes to be ready Jan 27 19:59:39.621: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218 �[1mSTEP:�[0m Create pods that use 4/5 of node resources. �[38;5;243m01/27/23 19:59:39.653�[0m Jan 27 19:59:39.731: INFO: Created pod: pod0-0-sched-preemption-low-priority Jan 27 19:59:39.767: INFO: Created pod: pod0-1-sched-preemption-medium-priority Jan 27 19:59:39.847: INFO: Created pod: pod1-0-sched-preemption-medium-priority Jan 27 19:59:39.881: INFO: Created pod: pod1-1-sched-preemption-medium-priority �[1mSTEP:�[0m Wait for pods to be scheduled. �[38;5;243m01/27/23 19:59:39.881�[0m Jan 27 19:59:39.881: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:39.918: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 36.898049ms Jan 27 19:59:41.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070795673s Jan 27 19:59:43.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07079101s Jan 27 19:59:45.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070204177s Jan 27 19:59:47.951: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069284049s Jan 27 19:59:49.952: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.070649632s Jan 27 19:59:49.952: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Jan 27 19:59:49.952: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:49.983: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.368484ms Jan 27 19:59:49.983: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:59:49.984: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:50.018: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 34.877154ms Jan 27 19:59:52.051: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067062283s Jan 27 19:59:54.054: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.07025081s Jan 27 19:59:54.054: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Jan 27 19:59:54.054: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-7211" to be "running" Jan 27 19:59:54.086: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 31.76379ms Jan 27 19:59:54.086: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" �[1mSTEP:�[0m Run a critical pod that use same resources as that of a lower priority pod �[38;5;243m01/27/23 19:59:54.086�[0m Jan 27 19:59:54.125: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" Jan 27 19:59:54.156: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.418856ms Jan 27 19:59:56.190: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064488451s Jan 27 19:59:58.191: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066345139s Jan 27 20:00:00.189: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.06442421s Jan 27 20:00:00.190: INFO: Pod "critical-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Jan 27 20:00:00.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-preemption-7211" for this suite. 
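The preemption spec above depends on pod priority: low- and medium-priority pods fill 4/5 of each node's resources, and critical-pod (created in kube-system) must displace the low-priority one. The sketch below shows how a PriorityClass and a pod's priorityClassName fit together; the class name, value, and image are illustrative, not the exact objects the test creates.

```go
// Sketch: a PriorityClass and a pod that references it by priorityClassName.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative low-priority class; the e2e suite creates its own classes
	// with test-specific names and values.
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"},
		Value:      1,
	}

	// A pod referencing the class; pods with no priorityClassName default to
	// priority 0 (or the cluster's global default class, if one exists).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod0-0-sched-preemption-low-priority"},
		Spec: corev1.PodSpec{
			PriorityClassName: "sched-preemption-low-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8", // assumed image
			}},
		},
	}

	for _, obj := range []interface{}{pc, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
```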
01/27/23 20:00:00.456 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
<< End Captured GinkgoWriter Output
------------------------------
[sig-api-machinery] Garbage collector
should support orphan deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:1040
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:00:00.684
Jan 27 20:00:00.684: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/27/23 20:00:00.685
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:00:00.784
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:00:00.845
[It] should support orphan deletion of custom resources test/e2e/apimachinery/garbage_collector.go:1040
Jan 27 20:00:00.907: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 27 20:00:03.160: INFO: created owner resource "ownerqvvrv"
Jan 27 20:00:03.195: INFO: created dependent resource "dependentl48z8"
STEP: wait for the owner to be deleted 01/27/23 20:00:03.23
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 01/27/23 20:00:13.264
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
Jan 27 20:00:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6385" for this suite.
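The garbage-collector spec above deletes the owner custom resource with the Orphan propagation policy and then watches for 30 seconds to confirm the dependent survives. A sketch of issuing such a delete with the dynamic client follows; the group/version/resource and the assumption that the CRD is namespaced in gc-6385 are placeholders for whatever throwaway CRD the test registers.

```go
// Sketch: delete an owner custom resource with orphan propagation, so the
// garbage collector must leave its dependents (which carry ownerReferences) alone.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Placeholder GVR; the e2e test creates a throwaway CRD and random resource
	// names such as "ownerqvvrv" seen in the log.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "owners"}

	orphan := metav1.DeletePropagationOrphan
	err = dyn.Resource(gvr).Namespace("gc-6385").Delete(context.TODO(), "ownerqvvrv",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("deleted owner with orphan propagation")
}
```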
�[38;5;243m01/27/23 20:00:43.963�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","completed":15,"skipped":1345,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [43.317 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should support orphan deletion of custom resources �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:1040�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:00:00.684�[0m Jan 27 20:00:00.684: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/27/23 20:00:00.685�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:00:00.784�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:00:00.845�[0m [It] should support orphan deletion of custom resources test/e2e/apimachinery/garbage_collector.go:1040 Jan 27 20:00:00.907: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 27 20:00:03.160: INFO: created owner resource "ownerqvvrv" Jan 27 20:00:03.195: INFO: created dependent resource "dependentl48z8" �[1mSTEP:�[0m wait for the owner to be deleted �[38;5;243m01/27/23 20:00:03.23�[0m �[1mSTEP:�[0m wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd �[38;5;243m01/27/23 20:00:13.264�[0m [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:00:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-6385" for this suite. 
�[38;5;243m01/27/23 20:00:43.963�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mReplicationController light�[0m �[1mShould scale from 2 pods to 1 pod [Slow]�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:82�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:00:44.002�[0m Jan 27 20:00:44.002: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/27/23 20:00:44.004�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:00:44.101�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:00:44.163�[0m [It] Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82 �[1mSTEP:�[0m Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas �[38;5;243m01/27/23 20:00:44.224�[0m �[1mSTEP:�[0m creating replication controller rc-light in namespace horizontal-pod-autoscaling-1423 �[38;5;243m01/27/23 20:00:44.268�[0m I0127 20:00:44.306019 13 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-1423, replica count: 2 I0127 20:00:54.361432 13 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 20:00:54.361�[0m �[1mSTEP:�[0m creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-1423 �[38;5;243m01/27/23 20:00:54.41�[0m I0127 20:00:54.448112 13 runners.go:193] Created replication controller with name: rc-light-ctrl, namespace: horizontal-pod-autoscaling-1423, replica count: 1 I0127 20:01:04.500850 13 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 20:01:09.501: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Jan 27 20:01:09.533: INFO: RC rc-light: consume 50 millicores in total Jan 27 20:01:09.533: INFO: RC rc-light: setting consumption to 50 millicores in total Jan 27 20:01:09.533: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:01:09.533: INFO: RC rc-light: consume 0 MB in total Jan 27 20:01:09.533: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:01:09.534: INFO: RC rc-light: disabling mem consumption Jan 27 20:01:09.534: INFO: RC rc-light: consume custom metric 0 in total Jan 27 20:01:09.534: INFO: RC rc-light: disabling consumption of custom metric QPS Jan 27 20:01:09.600: INFO: 
waiting for 1 replicas (current: 2) Jan 27 20:01:29.635: INFO: waiting for 1 replicas (current: 2) Jan 27 20:01:39.603: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:01:39.604: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:01:49.635: INFO: waiting for 1 replicas (current: 2) Jan 27 20:02:09.633: INFO: waiting for 1 replicas (current: 2) Jan 27 20:02:09.646: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:02:09.647: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:02:29.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:02:39.688: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:02:39.688: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:02:49.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:03:09.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:03:09.741: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:03:09.741: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:03:29.633: INFO: waiting for 1 replicas (current: 2) Jan 27 20:03:39.786: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:03:39.786: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:03:49.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:04:09.635: INFO: waiting for 1 replicas (current: 2) Jan 27 20:04:09.835: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:04:09.835: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:04:29.636: INFO: waiting for 1 replicas (current: 2) Jan 27 20:04:39.885: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:04:39.885: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:04:49.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:05:09.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:05:09.928: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:05:09.929: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:05:29.633: INFO: waiting for 1 replicas (current: 2) Jan 27 20:05:39.975: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:05:39.975: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:05:49.633: INFO: waiting for 1 replicas (current: 2) Jan 27 20:06:09.634: INFO: waiting for 1 replicas (current: 2) Jan 27 20:06:10.019: INFO: RC rc-light: sending request to consume 50 millicores Jan 27 20:06:10.019: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1423/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 27 20:06:29.638: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC rc-light 01/27/23 20:06:29.674 Jan 27 20:06:29.674: INFO: RC rc-light: stopping metric consumer Jan 27 20:06:29.674: INFO: RC rc-light: stopping mem consumer Jan 27 20:06:29.674: INFO: RC rc-light: stopping CPU consumer STEP: deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-1423, will wait for the garbage collector to delete the pods 01/27/23 20:06:39.675 Jan 27 20:06:39.796: INFO: Deleting ReplicationController rc-light took: 35.748561ms Jan 27 20:06:39.896: INFO: Terminating ReplicationController rc-light pods took: 100.840742ms STEP: deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-1423, will wait for the garbage collector to delete the pods 01/27/23 20:06:41.261 Jan 27 20:06:41.381: INFO: Deleting ReplicationController rc-light-ctrl took: 35.505053ms Jan 27 20:06:41.482: INFO: Terminating ReplicationController rc-light-ctrl pods took: 100.937984ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 20:06:43.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-1423" for this suite.
01/27/23 20:06:43.079 {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","completed":16,"skipped":1369,"failed":0} ------------------------------ • [SLOW TEST] [359.113 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 ReplicationController light test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
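The scale-down above comes from the resource consumer holding total usage at 50 millicores (the repeated ConsumeCPU proxy calls) until the autoscaler reduces rc-light to a single replica. As an illustrative sketch only, not the test's own code, an autoscaling/v1 object of roughly the shape this spec drives could be built like this in Go; the namespace, thresholds and replica bounds are assumptions inferred from the log:

```go
// Sketch, not the e2e framework's helper: a CPU-based HPA that scales a
// ReplicationController named "rc-light" between 1 and 2 replicas.
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(1)
	targetCPU := int32(50) // assumed threshold; the consumer drives only ~50 millicores, so the HPA drops to 1 replica

	hpa := &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "rc-light",
			Namespace: "horizontal-pod-autoscaling-1423",
		},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "v1",
				Kind:       "ReplicationController",
				Name:       "rc-light",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    2,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
	fmt.Printf("%+v\n", hpa.Spec)
}
```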
------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:06:43.119 Jan 27 20:06:43.119: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename namespaces 01/27/23 20:06:43.12 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:06:43.222 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:06:43.283 [It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 STEP: Creating a test namespace 01/27/23 20:06:43.346 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:06:43.448 STEP: Creating a service in the namespace 01/27/23 20:06:43.51 STEP: Deleting the namespace 01/27/23 20:06:43.552 STEP: Waiting for the namespace to be removed. 01/27/23 20:06:43.593 STEP: Recreating the namespace 01/27/23 20:06:49.625 STEP: Verifying there is no service in the namespace 01/27/23 20:06:49.725 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Jan 27 20:06:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8082" for this suite. 01/27/23 20:06:49.794 STEP: Destroying namespace "nsdeletetest-132" for this suite. 01/27/23 20:06:49.829 Jan 27 20:06:49.863: INFO: Namespace nsdeletetest-132 was already deleted STEP: Destroying namespace "nsdeletetest-8736" for this suite.
01/27/23 20:06:49.863 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","completed":17,"skipped":1391,"failed":0} ------------------------------ • [6.778 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250
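For reference, this is a rough, assumed sketch (client-go, not the conformance test's implementation; the helper name and timeouts are invented) of the property the spec above checks: deleting a namespace garbage-collects the Services inside it, so recreating the namespace yields an empty Service list.

```go
package nscheck

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// VerifyServicesGoneAfterNamespaceDelete deletes a namespace, waits for it to be
// fully removed, recreates it, and confirms no Service objects survived.
func VerifyServicesGoneAfterNamespaceDelete(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// The namespace controller removes the namespace's contents before the
	// Namespace object itself disappears, so poll until it is gone.
	if err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	}); err != nil {
		return err
	}
	// Recreate the namespace under the same name.
	if _, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		return err
	}
	// A fresh namespace must not contain any Services from its previous life.
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(svcs.Items) != 0 {
		return fmt.Errorf("expected no services in recreated namespace %q, found %d", ns, len(svcs.Items))
	}
	return nil
}
```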
------------------------------ [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:06:49.908 Jan 27 20:06:49.908: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion 01/27/23 20:06:49.909 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:06:50.008 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:06:50.076 [It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296 STEP: creating the pod 01/27/23 20:06:50.138 STEP: waiting for pod running 01/27/23 20:06:50.176 Jan 27 20:06:50.176: INFO: Waiting up to 2m0s for pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" in namespace "var-expansion-215" to be "running" Jan 27 20:06:50.207: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.454242ms Jan 27 20:06:52.240: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064419632s Jan 27 20:06:54.239: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063567606s Jan 27 20:06:56.241: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064946107s Jan 27 20:06:58.240: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064226653s Jan 27 20:07:00.241: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064952771s Jan 27 20:07:02.240: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063958609s Jan 27 20:07:04.239: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.063574363s Jan 27 20:07:06.240: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Running", Reason="", readiness=true.
Elapsed: 16.063698876s Jan 27 20:07:06.240: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" satisfied condition "running" STEP: creating a file in subpath 01/27/23 20:07:06.24 Jan 27 20:07:06.272: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-215 PodName:var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 27 20:07:06.272: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 27 20:07:06.273: INFO: ExecWithOptions: Clientset creation Jan 27 20:07:06.274: INFO: ExecWithOptions: execute(POST https://capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-215/pods/var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) STEP: test for file in mounted path 01/27/23 20:07:06.76 Jan 27 20:07:06.792: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-215 PodName:var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 27 20:07:06.792: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 27 20:07:06.793: INFO: ExecWithOptions: Clientset creation Jan 27 20:07:06.793: INFO: ExecWithOptions: execute(POST https://capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-215/pods/var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) STEP: updating the annotation value 01/27/23 20:07:07.117 Jan 27 20:07:07.689: INFO: Successfully updated pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" STEP: waiting for annotated pod running 01/27/23 20:07:07.689 Jan 27 20:07:07.689: INFO: Waiting up to 2m0s for pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" in namespace "var-expansion-215" to be "running" Jan 27 20:07:07.722: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5": Phase="Running", Reason="", readiness=true. Elapsed: 32.634162ms Jan 27 20:07:07.722: INFO: Pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" satisfied condition "running" STEP: deleting the pod gracefully 01/27/23 20:07:07.722 Jan 27 20:07:07.722: INFO: Deleting pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" in namespace "var-expansion-215" Jan 27 20:07:07.759: INFO: Wait up to 5m0s for pod "var-expansion-513a7981-ed00-4b1e-925a-df51a7c584a5" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 Jan 27 20:07:11.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-215" for this suite.
01/27/23 20:07:11.859 {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","completed":18,"skipped":1602,"failed":0} ------------------------------ • [21.989 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296
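The subpath test above touches /volume_mount/mypath/foo/test.log and then looks for /subpath_mount/test.log, i.e. the same volume is mounted twice, once in full and once through an expanded subpath. A hedged sketch of a pod with that shape (field values inferred from the log, not taken from the test source) could look like:

```go
// Sketch only: one emptyDir mounted at /volume_mount and again at /subpath_mount
// via subPathExpr, so writes under /volume_mount/mypath/foo/ show up at /subpath_mount/.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-demo", // placeholder name
			Annotations: map[string]string{"mysubpath": "mypath/foo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					// Downward API: expose the annotation as an env var used by subPathExpr.
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations['mysubpath']"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "workdir", MountPath: "/volume_mount"},
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(POD_SUBPATH)"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[1].SubPathExpr)
}
```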
------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] test/e2e/apimachinery/garbage_collector.go:849 [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:07:11.918 Jan 27 20:07:11.918: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc 01/27/23 20:07:11.92 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:07:12.02 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:07:12.081 [It] should not be blocked by dependency circle [Conformance] test/e2e/apimachinery/garbage_collector.go:849 Jan 27 20:07:12.288: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c4fb3426-f7e6-4e3e-b133-97f3dd951813", Controller:(*bool)(0xc00289c536), BlockOwnerDeletion:(*bool)(0xc00289c537)}} Jan 27 20:07:12.326: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e0a4cbfb-a1e2-4ee8-9201-5312aa28a974", Controller:(*bool)(0xc00289c7c6), BlockOwnerDeletion:(*bool)(0xc00289c7c7)}} Jan 27 20:07:12.367: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e540c042-6655-419d-a3fc-455563ce7062", Controller:(*bool)(0xc001396e36), BlockOwnerDeletion:(*bool)(0xc001396e37)}} [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:07:17.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9420" for this suite. 01/27/23 20:07:17.47 {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","completed":19,"skipped":1809,"failed":0} ------------------------------ • [5.591 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] test/e2e/apimachinery/garbage_collector.go:849
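The owner references printed above form a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the spec passes because the garbage collector still deletes the pods rather than deadlocking. A minimal sketch of how such a cycle is expressed on the objects, with placeholder UIDs, is:

```go
// Sketch only: three pods whose ownerReferences point at each other in a ring.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func ownedBy(name, ownerName string, ownerUID types.UID) corev1.Pod {
	truth := true
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID, // placeholder; a real reference uses the owner's actual UID
				Controller:         &truth,
				BlockOwnerDeletion: &truth,
			}},
		},
	}
}

func main() {
	pods := []corev1.Pod{
		ownedBy("pod1", "pod3", "uid-3"),
		ownedBy("pod2", "pod1", "uid-1"),
		ownedBy("pod3", "pod2", "uid-2"),
	}
	for _, p := range pods {
		fmt.Printf("%s owned by %s\n", p.Name, p.OwnerReferences[0].Name)
	}
}
```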
------------------------------ [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68 [BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:07:17.512 Jan 27 20:07:17.512: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename density-test-windows 01/27/23 20:07:17.514 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:07:17.615 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:07:17.677 [It] latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68 STEP: Creating a batch of pods 01/27/23 20:07:17.741 STEP: Waiting for all Pods to be observed by the watch...
�[38;5;243m01/27/23 20:07:17.741�[0m Jan 27 20:07:27.781: INFO: Waiting for pod test-ca143f68-6aa8-41be-a317-aac4f742bc41 to disappear Jan 27 20:07:27.792: INFO: Waiting for pod test-e59cb9ff-100b-4cbe-a2ad-ed9e29c5e697 to disappear Jan 27 20:07:27.792: INFO: Waiting for pod test-8f00a0cc-3eb3-4e19-bc6d-ef534d417420 to disappear Jan 27 20:07:27.793: INFO: Waiting for pod test-2a55bf15-e81d-4406-b5d2-4b52e642095d to disappear Jan 27 20:07:27.793: INFO: Waiting for pod test-0dc44719-a824-41da-82a6-bceb9102bdf8 to disappear Jan 27 20:07:27.817: INFO: Pod test-ca143f68-6aa8-41be-a317-aac4f742bc41 still exists Jan 27 20:07:27.817: INFO: Waiting for pod test-51b62a2a-8690-421a-8bca-aba20d620869 to disappear Jan 27 20:07:27.820: INFO: Waiting for pod test-bd46cb0f-56c0-4eca-9a6e-837e682c3223 to disappear Jan 27 20:07:27.880: INFO: Waiting for pod test-5ea3740c-6e8f-40ed-9ce6-89309eaeca11 to disappear Jan 27 20:07:27.880: INFO: Waiting for pod test-96cdffcd-7b01-42f4-b302-cf9278b28fe3 to disappear Jan 27 20:07:27.909: INFO: Waiting for pod test-5e8aa231-5288-4667-ae2a-4ae90b55a6e5 to disappear Jan 27 20:07:28.060: INFO: Pod test-e59cb9ff-100b-4cbe-a2ad-ed9e29c5e697 still exists Jan 27 20:07:28.061: INFO: Pod test-8f00a0cc-3eb3-4e19-bc6d-ef534d417420 still exists Jan 27 20:07:28.065: INFO: Pod test-2a55bf15-e81d-4406-b5d2-4b52e642095d still exists Jan 27 20:07:28.069: INFO: Pod test-0dc44719-a824-41da-82a6-bceb9102bdf8 still exists Jan 27 20:07:28.072: INFO: Pod test-51b62a2a-8690-421a-8bca-aba20d620869 still exists Jan 27 20:07:28.076: INFO: Pod test-bd46cb0f-56c0-4eca-9a6e-837e682c3223 still exists Jan 27 20:07:28.079: INFO: Pod test-5ea3740c-6e8f-40ed-9ce6-89309eaeca11 still exists Jan 27 20:07:28.083: INFO: Pod test-96cdffcd-7b01-42f4-b302-cf9278b28fe3 still exists Jan 27 20:07:28.086: INFO: Pod test-5e8aa231-5288-4667-ae2a-4ae90b55a6e5 still exists Jan 27 20:07:57.820: INFO: Waiting for pod test-ca143f68-6aa8-41be-a317-aac4f742bc41 to disappear Jan 27 20:07:57.853: INFO: Pod test-ca143f68-6aa8-41be-a317-aac4f742bc41 no longer exists Jan 27 20:07:58.061: INFO: Waiting for pod test-e59cb9ff-100b-4cbe-a2ad-ed9e29c5e697 to disappear Jan 27 20:07:58.061: INFO: Waiting for pod test-8f00a0cc-3eb3-4e19-bc6d-ef534d417420 to disappear Jan 27 20:07:58.065: INFO: Waiting for pod test-2a55bf15-e81d-4406-b5d2-4b52e642095d to disappear Jan 27 20:07:58.071: INFO: Waiting for pod test-0dc44719-a824-41da-82a6-bceb9102bdf8 to disappear Jan 27 20:07:58.073: INFO: Waiting for pod test-51b62a2a-8690-421a-8bca-aba20d620869 to disappear Jan 27 20:07:58.076: INFO: Waiting for pod test-bd46cb0f-56c0-4eca-9a6e-837e682c3223 to disappear Jan 27 20:07:58.080: INFO: Waiting for pod test-5ea3740c-6e8f-40ed-9ce6-89309eaeca11 to disappear Jan 27 20:07:58.084: INFO: Waiting for pod test-96cdffcd-7b01-42f4-b302-cf9278b28fe3 to disappear Jan 27 20:07:58.086: INFO: Waiting for pod test-5e8aa231-5288-4667-ae2a-4ae90b55a6e5 to disappear Jan 27 20:07:58.093: INFO: Pod test-8f00a0cc-3eb3-4e19-bc6d-ef534d417420 no longer exists Jan 27 20:07:58.093: INFO: Pod test-e59cb9ff-100b-4cbe-a2ad-ed9e29c5e697 no longer exists Jan 27 20:07:58.097: INFO: Pod test-2a55bf15-e81d-4406-b5d2-4b52e642095d no longer exists Jan 27 20:07:58.102: INFO: Pod test-0dc44719-a824-41da-82a6-bceb9102bdf8 no longer exists Jan 27 20:07:58.104: INFO: Pod test-51b62a2a-8690-421a-8bca-aba20d620869 no longer exists Jan 27 20:07:58.107: INFO: Pod test-bd46cb0f-56c0-4eca-9a6e-837e682c3223 no longer exists Jan 27 20:07:58.111: INFO: Pod 
test-5ea3740c-6e8f-40ed-9ce6-89309eaeca11 no longer exists Jan 27 20:07:58.115: INFO: Pod test-96cdffcd-7b01-42f4-b302-cf9278b28fe3 no longer exists Jan 27 20:07:58.117: INFO: Pod test-5e8aa231-5288-4667-ae2a-4ae90b55a6e5 no longer exists [AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/framework.go:187 Jan 27 20:07:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "density-test-windows-6866" for this suite. 01/27/23 20:07:58.152 {"msg":"PASSED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","completed":20,"skipped":1826,"failed":0} ------------------------------ • [40.676 seconds] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/windows/framework.go:27 create a batch of pods test/e2e/windows/density.go:47 latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68
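The density spec above submits ten pods back-to-back with no interval and measures how long the batch takes to come up and be cleaned up. A rough, assumed sketch of that create-then-wait pattern with client-go (the helper name, image and timeouts are placeholders, not the e2e framework's code):

```go
package density

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// CreateBatchAndWait creates n pods with no delay between them, then polls until
// every pod reports Running; the elapsed time is the batch start-up latency.
func CreateBatchAndWait(ctx context.Context, cs kubernetes.Interface, ns string, n int) (time.Duration, error) {
	start := time.Now()
	names := make([]string, 0, n)
	for i := 0; i < n; i++ {
		p := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("test-%d", i)}, // placeholder names
			Spec: corev1.PodSpec{
				NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
				Containers: []corev1.Container{{
					Name:  "pause",
					Image: "mcr.microsoft.com/oss/kubernetes/pause:3.6", // placeholder image
				}},
			},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(ctx, p, metav1.CreateOptions{}); err != nil {
			return 0, err
		}
		names = append(names, p.Name)
	}
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		for _, name := range names {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil || pod.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
	return time.Since(start), err
}
```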
------------------------------ [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137 [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:07:58.198 Jan 27 20:07:58.198: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 20:07:58.2 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:07:58.301 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:07:58.363 [It] shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137 STEP: setting up resource consumer and HPA 01/27/23 20:07:58.425 STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 01/27/23 20:07:58.425 STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-5538
01/27/23 20:07:58.48
I0127 20:07:58.515548 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-5538, replica count: 1
I0127 20:08:08.570099 13 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/27/23 20:08:08.57
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-5538 01/27/23 20:08:08.619
I0127 20:08:08.658448 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-5538, replica count: 1
I0127 20:08:18.712431 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 27 20:08:23.713: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 27 20:08:23.746: INFO: RC consumer: consume 110 millicores in total
Jan 27 20:08:23.746: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 27 20:08:23.746: INFO: RC consumer: sending request to consume 110 millicores
Jan 27 20:08:23.746: INFO: RC consumer: consume 0 MB in total
Jan 27 20:08:23.746: INFO: RC consumer: disabling mem consumption
Jan 27 20:08:23.746: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 27 20:08:23.746: INFO: RC consumer: consume custom metric 0 in total
Jan 27 20:08:23.746: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: trying to trigger scale up 01/27/23 20:08:23.782
Jan 27 20:08:23.783: INFO: RC consumer: consume 880 millicores in total
Jan 27 20:08:23.827: INFO: RC consumer: setting consumption to 880 millicores in total
Jan 27 20:08:23.860: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:08:23.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 27 20:08:33.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:08:33.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 27 20:08:43.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:08:43.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cf8f0}
Jan 27 20:08:53.828: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:08:53.828: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:08:53.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:08:53.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003581980}
Jan 27 20:09:03.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:03.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0037ff3a0}
Jan 27 20:09:13.927: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:13.959: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003581a80}
Jan 27 20:09:23.880: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:09:23.880: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:09:23.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:23.961: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cfbf0}
Jan 27 20:09:33.926: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:33.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cff70}
Jan 27 20:09:43.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:43.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020ce260}
Jan 27 20:09:53.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:09:53.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020ce580}
Jan 27 20:09:54.328: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:09:54.328: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:10:03.927: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:03.960: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cf440}
Jan 27 20:10:13.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:13.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cf720}
Jan 27 20:10:23.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:23.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0020cfa10}
Jan 27 20:10:24.626: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:10:24.627: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:10:33.927: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:33.959: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00289c460}
Jan 27 20:10:43.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:43.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00289c540}
Jan 27 20:10:53.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:10:53.958: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0037fe270}
Jan 27 20:10:55.485: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:10:55.486: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:11:03.926: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:03.958: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0037fe4f0}
Jan 27 20:11:13.924: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:13.956: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00289c8a0}
Jan 27 20:11:23.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:23.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00289cb90}
Jan 27 20:11:26.373: INFO: RC consumer: sending request to consume 880 millicores
Jan 27 20:11:26.373: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5538/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 27 20:11:33.927: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:33.959: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003580190}
Jan 27 20:11:43.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:43.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003580120}
Jan 27 20:11:53.925: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:53.957: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0037fe2a0}
Jan 27 20:11:53.989: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 27 20:11:54.021: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0035803e0}
Jan 27 20:11:54.021: INFO: Number of replicas was stable over 3m30s
STEP: verifying time waited for a scale up 01/27/23 20:11:54.021
Jan 27 20:11:54.021: INFO: time waited for scale up: 3m30.193201212s
STEP: verifying number of replicas 01/27/23 20:11:54.021
STEP: Removing consuming RC consumer 01/27/23 20:11:54.088
Jan 27 20:11:54.088: INFO: RC consumer: stopping metric consumer
Jan 27 20:11:54.088: INFO: RC consumer: stopping CPU consumer
Jan 27 20:11:54.088: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-5538, will wait for the garbage collector to delete the pods 01/27/23 20:12:04.091
Jan 27 20:12:04.211: INFO: Deleting Deployment.apps consumer took: 36.495162ms
Jan 27 20:12:04.311: INFO: Terminating Deployment.apps consumer pods took: 100.287753ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-5538, will wait for the garbage collector to delete the pods 01/27/23 20:12:05.967
Jan 27 20:12:06.085: INFO: Deleting ReplicationController consumer-ctrl took: 35.105401ms
Jan 27 20:12:06.186: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.862634ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:187
Jan 27 20:12:08.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-5538" for this suite. 01/27/23 20:12:08.291
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up","completed":21,"skipped":1932,"failed":0}
------------------------------
• [SLOW TEST] [250.128 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
  with autoscaling disabled
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:136
    shouldn't scale up
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
------------------------------
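The spec above drives CPU load at a Deployment whose HorizontalPodAutoscaler has scale-up switched off through the autoscaling/v2 behavior field, then asserts that the replica count never leaves [1, 1] for the whole 3m30s window. The Go sketch below is illustrative only: it shows roughly what such an HPA object looks like when built from the k8s.io/api types; the utilization target, replica bounds and object names are assumptions, not values read from this run.

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// disabledScaleUpHPA sketches an HPA whose scale-up policy is disabled, so the
// controller may observe rising CPU utilization but must keep DesiredReplicas
// pinned to the current count, which is the stable window the spec waits out.
func disabledScaleUpHPA() *autoscalingv2.HorizontalPodAutoscaler {
	disabled := autoscalingv2.DisabledPolicySelect
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "consumer", // assumed to match the consumer Deployment
			Namespace: "horizontal-pod-autoscaling-5538",
		},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "consumer",
			},
			MinReplicas: int32Ptr(1),
			MaxReplicas: 10, // assumed upper bound
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: int32Ptr(20), // assumed target
					},
				},
			}},
			// The non-default behavior under test: scale-up is disabled outright.
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{SelectPolicy: &disabled},
			},
		},
	}
}

func main() {
	hpa := disabledScaleUpHPA()
	fmt.Println(hpa.Name, "scaleUp policy:", *hpa.Spec.Behavior.ScaleUp.SelectPolicy)
}

With scale-up disabled this way, the status lines above keep reporting CurrentReplicas:1 DesiredReplicas:1 even while the consumer is asked for 880 millicores, which is exactly what the spec verifies before tearing the consumer down.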
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/apimachinery/garbage_collector.go:735
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:12:08.332
Jan 27 20:12:08.333: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/27/23 20:12:08.334
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:12:08.432
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:12:08.494
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/apimachinery/garbage_collector.go:735
STEP: create the rc1 01/27/23 20:12:08.59
STEP: create the rc2 01/27/23 20:12:08.625
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 01/27/23 20:12:13.716
STEP: delete the rc simpletest-rc-to-be-deleted 01/27/23 20:12:15.62
STEP: wait for the rc to be deleted 01/27/23 20:12:15.663
Jan 27 20:12:20.744: INFO: 70 pods remaining
Jan 27 20:12:20.744: INFO: 70 pods has nil DeletionTimestamp
Jan 27 20:12:20.744: INFO:
STEP: Gathering metrics 01/27/23 20:12:25.738
Jan 27 20:12:25.841: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready"
Jan 27 20:12:25.873: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. Elapsed: 31.99359ms
Jan 27 20:12:25.873: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true)
Jan 27 20:12:25.873: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready"
Jan 27 20:12:26.263: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 27 20:12:26.263: INFO: Deleting pod "simpletest-rc-to-be-deleted-28qbb" in namespace "gc-8544"
Jan 27 20:12:26.313: INFO: Deleting pod "simpletest-rc-to-be-deleted-2h7lp" in namespace "gc-8544"
Jan 27 20:12:26.359: INFO: Deleting pod "simpletest-rc-to-be-deleted-2svmd" in namespace "gc-8544"
Jan 27 20:12:26.400: INFO: Deleting pod "simpletest-rc-to-be-deleted-2tc4f" in namespace "gc-8544"
Jan 27 20:12:26.444: INFO: Deleting pod "simpletest-rc-to-be-deleted-42mqf" in namespace "gc-8544"
Jan 27 20:12:26.492: INFO: Deleting pod "simpletest-rc-to-be-deleted-47tvl" in namespace "gc-8544"
Jan 27 20:12:26.533: INFO: Deleting pod "simpletest-rc-to-be-deleted-496nt" in namespace "gc-8544"
Jan 27 20:12:26.578: INFO: Deleting pod "simpletest-rc-to-be-deleted-4pn72" in namespace "gc-8544"
Jan 27 20:12:26.623: INFO: Deleting pod "simpletest-rc-to-be-deleted-55qr9" in namespace "gc-8544"
Jan 27 20:12:26.672: INFO: Deleting pod "simpletest-rc-to-be-deleted-565st" in namespace "gc-8544"
Jan 27 20:12:26.720: INFO: Deleting pod "simpletest-rc-to-be-deleted-5gp5k" in namespace "gc-8544"
Jan 27 20:12:26.761: INFO: Deleting pod "simpletest-rc-to-be-deleted-5h247" in namespace "gc-8544"
Jan 27 20:12:26.804: INFO: Deleting pod "simpletest-rc-to-be-deleted-5v2fl" in namespace "gc-8544"
Jan 27 20:12:26.848: INFO: Deleting pod "simpletest-rc-to-be-deleted-662cx" in namespace "gc-8544"
Jan 27 20:12:26.889: INFO: Deleting pod "simpletest-rc-to-be-deleted-66ljv" in namespace "gc-8544"
Jan 27 20:12:26.928: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fsmb" in namespace "gc-8544"
Jan 27 20:12:26.969: INFO: Deleting pod "simpletest-rc-to-be-deleted-6h7tk" in namespace "gc-8544"
Jan 27 20:12:27.012: INFO: Deleting pod "simpletest-rc-to-be-deleted-6p5l8" in namespace "gc-8544"
Jan 27 20:12:27.056: INFO: Deleting pod "simpletest-rc-to-be-deleted-72fml" in namespace "gc-8544"
Jan 27 20:12:27.112: INFO: Deleting pod "simpletest-rc-to-be-deleted-72r5s" in namespace "gc-8544"
Jan 27 20:12:27.153: INFO: Deleting pod "simpletest-rc-to-be-deleted-76stj" in namespace "gc-8544"
Jan 27 20:12:27.203: INFO: Deleting pod "simpletest-rc-to-be-deleted-778g5" in namespace "gc-8544"
Jan 27 20:12:27.252: INFO: Deleting pod "simpletest-rc-to-be-deleted-78fsm" in namespace "gc-8544"
Jan 27 20:12:27.290: INFO: Deleting pod "simpletest-rc-to-be-deleted-7nxnh" in namespace "gc-8544"
Jan 27 20:12:27.336: INFO: Deleting pod "simpletest-rc-to-be-deleted-84f6b" in namespace "gc-8544"
Jan 27 20:12:27.378: INFO: Deleting pod "simpletest-rc-to-be-deleted-879x6" in namespace "gc-8544"
Jan 27 20:12:27.416: INFO: Deleting pod "simpletest-rc-to-be-deleted-89fwb" in namespace "gc-8544"
Jan 27 20:12:27.457: INFO: Deleting pod "simpletest-rc-to-be-deleted-8d7th" in namespace "gc-8544"
Jan 27 20:12:27.554: INFO: Deleting pod "simpletest-rc-to-be-deleted-8fvmm" in namespace "gc-8544"
Jan 27 20:12:27.599: INFO: Deleting pod "simpletest-rc-to-be-deleted-8gk8k" in namespace "gc-8544"
Jan 27 20:12:27.641: INFO: Deleting pod "simpletest-rc-to-be-deleted-8lw6m" in namespace "gc-8544"
Jan 27 20:12:27.688: INFO: Deleting pod "simpletest-rc-to-be-deleted-8np6c" in namespace "gc-8544"
Jan 27 20:12:27.738: INFO: Deleting pod "simpletest-rc-to-be-deleted-8wtcb" in namespace "gc-8544"
Jan 27 20:12:27.781: INFO: Deleting pod "simpletest-rc-to-be-deleted-9d8d6" in namespace "gc-8544"
Jan 27 20:12:27.825: INFO: Deleting pod "simpletest-rc-to-be-deleted-9pzlk" in namespace "gc-8544"
Jan 27 20:12:27.870: INFO: Deleting pod "simpletest-rc-to-be-deleted-9vdjb" in namespace "gc-8544"
Jan 27 20:12:27.912: INFO: Deleting pod "simpletest-rc-to-be-deleted-9vkch" in namespace "gc-8544"
Jan 27 20:12:27.953: INFO: Deleting pod "simpletest-rc-to-be-deleted-9zxjt" in namespace "gc-8544"
Jan 27 20:12:27.998: INFO: Deleting pod "simpletest-rc-to-be-deleted-bpc92" in namespace "gc-8544"
Jan 27 20:12:28.042: INFO: Deleting pod "simpletest-rc-to-be-deleted-c25kh" in namespace "gc-8544"
Jan 27 20:12:28.083: INFO: Deleting pod "simpletest-rc-to-be-deleted-c55qr" in namespace "gc-8544"
Jan 27 20:12:28.126: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7dvj" in namespace "gc-8544"
Jan 27 20:12:28.167: INFO: Deleting pod "simpletest-rc-to-be-deleted-cpzpb" in namespace "gc-8544"
Jan 27 20:12:28.216: INFO: Deleting pod "simpletest-rc-to-be-deleted-d8jr9" in namespace "gc-8544"
Jan 27 20:12:28.259: INFO: Deleting pod "simpletest-rc-to-be-deleted-d9tfg" in namespace "gc-8544"
Jan 27 20:12:28.301: INFO: Deleting pod "simpletest-rc-to-be-deleted-drqc7" in namespace "gc-8544"
Jan 27 20:12:28.347: INFO: Deleting pod "simpletest-rc-to-be-deleted-dvr46" in namespace "gc-8544"
Jan 27 20:12:28.397: INFO: Deleting pod "simpletest-rc-to-be-deleted-dxqzf" in namespace "gc-8544"
Jan 27 20:12:28.442: INFO: Deleting pod "simpletest-rc-to-be-deleted-fr6qj" in namespace "gc-8544"
Jan 27 20:12:28.490: INFO: Deleting pod "simpletest-rc-to-be-deleted-fwktb" in namespace "gc-8544"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jan 27 20:12:28.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8544" for this suite. 01/27/23 20:12:28.565
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","completed":22,"skipped":1978,"failed":0}
------------------------------
• [20.271 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/apimachinery/garbage_collector.go:735
------------------------------
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/common/node/expansion.go:151
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:12:28.615
Jan 27 20:12:28.615: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 01/27/23 20:12:28.616
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:12:28.714
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:12:28.776
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/common/node/expansion.go:151
Jan 27 20:12:28.875: INFO: Waiting up to 2m0s for pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca" in namespace "var-expansion-5980" to be "container 0 failed with reason CreateContainerConfigError"
Jan 27 20:12:28.914: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 39.148516ms
Jan 27 20:12:30.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07125463s
Jan 27 20:12:32.950: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074980387s
Jan 27 20:12:34.948: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072712138s
Jan 27 20:12:36.946: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070754748s
Jan 27 20:12:38.948: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072738385s
Jan 27 20:12:40.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071448419s
Jan 27 20:12:42.948: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.072765079s
Jan 27 20:12:44.948: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.07253221s
Jan 27 20:12:46.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 18.071303382s
Jan 27 20:12:48.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 20.071222645s
Jan 27 20:12:50.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.071820344s
Jan 27 20:12:50.947: INFO: Pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Jan 27 20:12:50.947: INFO: Deleting pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca" in namespace "var-expansion-5980"
Jan 27 20:12:50.985: INFO: Wait up to 5m0s for pod "var-expansion-23e89aba-c1c2-486c-97a1-e5f2f3228aca" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jan 27 20:12:59.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5980" for this suite. 01/27/23 20:12:59.084
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","completed":23,"skipped":2134,"failed":0}
------------------------------
• [30.511 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/common/node/expansion.go:151
------------------------------
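The var-expansion spec above never expects its container to start: the pod is only watched until it reports CreateContainerConfigError and is then deleted. The sketch below is an assumption about the pod shape involved, inferred from the spec name rather than its source: an environment value containing a backtick is referenced from a volume mount's subPathExpr, so the expanded subPath is rejected when the kubelet tries to create the container and the pod stays Pending, matching the polling in the log. The image, names and exact value are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backtickSubpathPod sketches a pod whose subPathExpr expands to a value
// containing a backtick; container creation should fail with
// CreateContainerConfigError, leaving the pod Pending as in the log above.
func backtickSubpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "var-expansion-backtick", // illustrative name
			Namespace: "var-expansion-5980",
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "true"},
				Env: []corev1.EnvVar{{
					Name:  "POD_NAME",
					Value: "..`..", // a backtick is not legal once expanded into a subPath
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/subpath_mount",
					SubPathExpr: "$(POD_NAME)", // expansion produces the invalid subPath
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(backtickSubpathPod().Spec.Containers[0].VolumeMounts[0].SubPathExpr)
}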
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController
  Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:12:59.129
Jan 27 20:12:59.129: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 20:12:59.131
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:12:59.23
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:12:59.292
[It] Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 5 replicas 01/27/23 20:12:59.355
STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-8430 01/27/23 20:12:59.401
I0127 20:12:59.435608 13 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-8430, replica count: 5
I0127 20:13:09.487316 13 runners.go:193] rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/27/23 20:13:09.487
STEP: creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-8430 01/27/23 20:13:09.531
I0127 20:13:09.569296 13 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-8430, replica count: 1
I0127 20:13:19.623462 13 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 27 20:13:24.626: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1
Jan 27 20:13:24.658: INFO: RC rc: consume 325 millicores in total
Jan 27 20:13:24.658: INFO: RC rc: setting consumption to 325 millicores in total
Jan 27 20:13:24.658: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:13:24.658: INFO: RC rc: consume 0 MB in total
Jan 27 20:13:24.658: INFO: RC rc: disabling mem consumption
Jan 27 20:13:24.658: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:13:24.658: INFO: RC rc: consume custom metric 0 in total
Jan 27 20:13:24.658: INFO: RC rc: disabling consumption of custom metric QPS
Jan 27 20:13:24.725: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:13:44.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:13:54.755: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:13:54.755: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:14:04.759: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:14:24.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:14:24.798: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:14:24.798: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:14:44.762: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:14:54.841: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:14:54.841: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:15:04.765: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:15:24.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:15:24.882: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:15:24.882: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:15:44.762: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:15:54.922: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:15:54.922: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:16:04.761: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:16:24.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:16:24.962: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:16:24.962: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:16:44.761: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:16:55.005: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:16:55.005: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
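Every ConsumeCPU entry in this spec records a request sent through the API server's service proxy to the resource consumer's controller service (rc-ctrl). The helper below only reassembles a URL of that form from its parts; the function itself is an illustrative assumption, while the host, namespace, service and query parameters mirror the entries logged here.

package main

import (
	"fmt"
	"net/url"
)

// consumeCPUURL rebuilds a service-proxy URL of the shape logged above, e.g.
// .../services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100.
func consumeCPUURL(host, namespace, service string, millicores, durationSec, requestSize int) string {
	q := url.Values{}
	q.Set("durationSec", fmt.Sprint(durationSec))
	q.Set("millicores", fmt.Sprint(millicores))
	q.Set("requestSizeMillicores", fmt.Sprint(requestSize))
	u := url.URL{
		Scheme:   "https",
		Host:     host,
		Path:     fmt.Sprintf("/api/v1/namespaces/%s/services/%s/proxy/ConsumeCPU", namespace, service),
		RawQuery: q.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(consumeCPUURL(
		"capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443",
		"horizontal-pod-autoscaling-8430", "rc-ctrl",
		325, 30, 100))
}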
Jan 27 20:17:04.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:17:24.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:17:25.045: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:17:25.046: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:17:44.763: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:17:55.085: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:17:55.086: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:18:04.758: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:18:24.759: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:18:25.126: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:18:25.126: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:18:44.761: INFO: waiting for 3 replicas (current: 3)
Jan 27 20:18:44.792: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:18:44.824: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46584}
Jan 27 20:18:54.858: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:18:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c}
Jan 27 20:18:55.167: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:18:55.167: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:19:04.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:04.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0594}
Jan 27 20:19:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58424}
Jan 27 20:19:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5869c}
Jan 27 20:19:25.210: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:19:25.210: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:19:34.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:34.894: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c469b4}
Jan 27 20:19:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46c1c}
Jan 27 20:19:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:19:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4607c}
Jan 27 20:19:55.249: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:19:55.249: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:20:04.860: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5824c}
Jan 27 20:20:14.863: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:14.894: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4648c}
Jan 27 20:20:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:24.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46704}
Jan 27 20:20:25.289: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:20:25.290: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:20:34.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:34.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4699c}
Jan 27 20:20:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5864c}
Jan 27 20:20:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:20:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b588cc}
Jan 27 20:20:55.330: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:20:55.330: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:21:04.859: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:04.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af03dc}
Jan 27 20:21:14.858: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:14.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0494}
Jan 27 20:21:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af05c4}
Jan 27 20:21:25.369: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:21:25.370: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:21:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46dc4}
Jan 27 20:21:44.859: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46fac}
Jan 27 20:21:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:21:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4607c}
Jan 27 20:21:55.410: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:21:55.410: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:22:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:04.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c}
Jan 27 20:22:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b582e4}
Jan 27 20:22:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5839c}
Jan 27 20:22:25.452: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:22:25.452: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:22:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58594}
Jan 27 20:22:44.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:44.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4665c}
Jan 27 20:22:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:22:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5881c}
Jan 27 20:22:55.492: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:22:55.492: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:23:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:04.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58a2c}
Jan 27 20:23:14.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:14.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af066c}
Jan 27 20:23:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46994}
Jan 27 20:23:25.533: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:23:25.533: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:23:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58c74}
Jan 27 20:23:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:44.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58d3c}
Jan 27 20:23:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:23:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b581b4}
Jan 27 20:23:55.574: INFO: RC rc: sending request to consume 325 millicores
Jan 27 20:23:55.574: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:24:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:24:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af024c}
Jan 27 20:24:14.859: INFO: expecting there to be in [3, 4] replicas (are: 3)
Jan 27 20:24:14.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000
UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c460fc} Jan 27 20:24:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b584ac} Jan 27 20:24:25.615: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:24:25.616: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:24:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af05dc} Jan 27 20:24:44.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:44.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5879c} Jan 27 20:24:54.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:54.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af096c} Jan 27 20:24:55.655: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:24:55.655: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:25:04.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58a0c} Jan 27 20:25:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:14.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4651c} Jan 27 20:25:25.635: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:25.666: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4677c} Jan 27 20:25:25.695: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:25:25.696: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:25:34.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:34.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0d9c} Jan 27 20:25:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0f7c} Jan 27 20:25:54.857: INFO: expecting 
there to be in [3, 4] replicas (are: 3) Jan 27 20:25:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af01c4} Jan 27 20:25:55.737: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:25:55.737: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:26:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46094} Jan 27 20:26:14.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:14.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af078c} Jan 27 20:26:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af098c} Jan 27 20:26:25.776: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:26:25.777: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:26:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:34.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46374} Jan 27 20:26:44.859: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46444} Jan 27 20:26:54.856: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46524} Jan 27 20:26:55.825: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:26:55.825: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:27:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c467ac} Jan 27 20:27:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af127c} Jan 27 20:27:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:24.889: INFO: HPA status: {ObservedGeneration:<nil> 
LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af148c} Jan 27 20:27:25.867: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:27:25.867: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:27:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5807c} Jan 27 20:27:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:44.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46994} Jan 27 20:27:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:54.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0094} Jan 27 20:27:55.907: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:27:55.907: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:28:04.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4622c} Jan 27 20:28:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c} Jan 27 20:28:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af02fc} Jan 27 20:28:25.947: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:28:25.947: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:28:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:34.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af056c} Jan 27 20:28:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c466ec} Jan 27 20:28:44.924: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:44.955: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4694c} 
Jan 27 20:28:44.956: INFO: Number of replicas was stable over 10m0s Jan 27 20:28:44.956: INFO: RC rc: consume 10 millicores in total Jan 27 20:28:44.956: INFO: RC rc: setting consumption to 10 millicores in total Jan 27 20:28:44.987: INFO: waiting for 1 replicas (current: 3) Jan 27 20:28:55.991: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:28:55.991: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:29:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:25.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:26.030: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:29:26.030: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:29:45.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:56.070: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:29:56.070: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:30:05.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:25.020: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:26.108: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:30:26.108: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:30:45.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:56.147: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:30:56.147: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:31:05.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:25.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:26.191: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:31:26.191: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:31:45.023: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:56.229: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:31:56.229: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:32:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:25.020: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:26.272: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:32:26.272: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:32:45.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:56.312: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:32:56.312: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:33:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:25.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:26.351: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:33:26.351: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:33:45.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:56.391: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:33:56.392: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:34:05.020: INFO: waiting for 1 replicas (current: 2) Jan 27 20:34:25.019: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC rc 01/27/23 20:34:25.054 Jan 27 20:34:25.054: INFO: RC rc: stopping metric consumer Jan 27 20:34:25.054: INFO: RC rc: stopping mem consumer Jan 27 20:34:25.054: INFO: RC rc: stopping CPU consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-8430, will wait for the garbage collector to delete the pods 01/27/23 20:34:35.059 Jan 27 20:34:35.181: INFO: Deleting ReplicationController rc took: 38.42465ms Jan 27 20:34:35.282: INFO: Terminating ReplicationController rc pods took: 101.154636ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-8430, will wait for the garbage collector to delete the pods 01/27/23 20:34:36.642 Jan 27 20:34:36.759: INFO: Deleting ReplicationController rc-ctrl took: 35.211927ms Jan 27 20:34:36.860: INFO: Terminating ReplicationController rc-ctrl pods took: 101.053091ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 20:34:38.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-8430" for this suite.
01/27/23 20:34:38.65 {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","completed":24,"skipped":2149,"failed":0} ------------------------------ • [SLOW TEST] [1299.557 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:59 Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
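For reference, the ConsumeCPU URLs that recur in this spec's output are requests the e2e resource-consumer helper sends through the apiserver's service proxy to the rc-ctrl controller service, telling the rc pods how much CPU to burn (325 millicores while the HPA scales 5 -> 3, then 10 millicores to drive the scale to 1). A rough manual equivalent is sketched below; the use of kubectl proxy on port 8001 and the POST method are illustrative assumptions, not taken from this log.

# Open a local proxy to the workload cluster's apiserver (assumes a working kubeconfig).
kubectl proxy --port=8001 &

# Mirror one of the ConsumeCPU calls logged above, reusing the same namespace, service
# path, and query parameters (30s of load at ~325 millicores).
curl -X POST "http://127.0.0.1:8001/api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100"

The HPA under test reacts to that load; the repeated "waiting for N replicas" and "HPA status" lines are the test polling the ReplicationController and autoscaler status until the expected replica count is reached and then held stable for 10 minutes.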
replicas (current: 5) Jan 27 20:13:44.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:13:54.755: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:13:54.755: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:14:04.759: INFO: waiting for 3 replicas (current: 5) Jan 27 20:14:24.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:14:24.798: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:14:24.798: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:14:44.762: INFO: waiting for 3 replicas (current: 5) Jan 27 20:14:54.841: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:14:54.841: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:15:04.765: INFO: waiting for 3 replicas (current: 5) Jan 27 20:15:24.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:15:24.882: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:15:24.882: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:15:44.762: INFO: waiting for 3 replicas (current: 5) Jan 27 20:15:54.922: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:15:54.922: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:16:04.761: INFO: waiting for 3 replicas (current: 5) Jan 27 20:16:24.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:16:24.962: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:16:24.962: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:16:44.761: INFO: waiting for 3 replicas (current: 5) Jan 27 20:16:55.005: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:16:55.005: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:17:04.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:17:24.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:17:25.045: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:17:25.046: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:17:44.763: INFO: waiting for 3 replicas (current: 5) 
Jan 27 20:17:55.085: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:17:55.086: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:18:04.758: INFO: waiting for 3 replicas (current: 5) Jan 27 20:18:24.759: INFO: waiting for 3 replicas (current: 5) Jan 27 20:18:25.126: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:18:25.126: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:18:44.761: INFO: waiting for 3 replicas (current: 3) Jan 27 20:18:44.792: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:18:44.824: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46584} Jan 27 20:18:54.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:18:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c} Jan 27 20:18:55.167: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:18:55.167: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:19:04.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:04.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0594} Jan 27 20:19:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58424} Jan 27 20:19:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5869c} Jan 27 20:19:25.210: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:19:25.210: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:19:34.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:34.894: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c469b4} Jan 27 20:19:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46c1c} Jan 27 20:19:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:19:54.889: INFO: HPA status: 
{ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4607c} Jan 27 20:19:55.249: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:19:55.249: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:20:04.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5824c} Jan 27 20:20:14.863: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:14.894: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4648c} Jan 27 20:20:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:24.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46704} Jan 27 20:20:25.289: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:20:25.290: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:20:34.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:34.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4699c} Jan 27 20:20:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5864c} Jan 27 20:20:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:20:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b588cc} Jan 27 20:20:55.330: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:20:55.330: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:21:04.859: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:04.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af03dc} Jan 27 20:21:14.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:14.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0494} Jan 27 20:21:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 
CurrentCPUUtilizationPercentage:0xc003af05c4} Jan 27 20:21:25.369: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:21:25.370: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:21:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46dc4} Jan 27 20:21:44.859: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46fac} Jan 27 20:21:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:21:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4607c} Jan 27 20:21:55.410: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:21:55.410: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:22:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:04.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c} Jan 27 20:22:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b582e4} Jan 27 20:22:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5839c} Jan 27 20:22:25.452: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:22:25.452: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:22:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58594} Jan 27 20:22:44.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:44.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4665c} Jan 27 20:22:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:22:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5881c} Jan 27 20:22:55.492: INFO: RC rc: sending request to consume 325 millicores Jan 
27 20:22:55.492: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:23:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:04.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58a2c} Jan 27 20:23:14.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:14.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af066c} Jan 27 20:23:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46994} Jan 27 20:23:25.533: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:23:25.533: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:23:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58c74} Jan 27 20:23:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:44.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58d3c} Jan 27 20:23:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:23:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b581b4} Jan 27 20:23:55.574: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:23:55.574: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:24:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af024c} Jan 27 20:24:14.859: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:14.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c460fc} Jan 27 20:24:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b584ac} Jan 27 20:24:25.615: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:24:25.616: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:24:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af05dc} Jan 27 20:24:44.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:44.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5879c} Jan 27 20:24:54.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:24:54.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af096c} Jan 27 20:24:55.655: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:24:55.655: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:25:04.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b58a0c} Jan 27 20:25:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:14.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4651c} Jan 27 20:25:25.635: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:25.666: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4677c} Jan 27 20:25:25.695: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:25:25.696: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:25:34.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:34.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0d9c} Jan 27 20:25:44.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0f7c} Jan 27 20:25:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:25:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af01c4} Jan 27 20:25:55.737: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:25:55.737: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:26:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46094} Jan 27 20:26:14.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:14.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af078c} Jan 27 20:26:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:24.890: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af098c} Jan 27 20:26:25.776: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:26:25.777: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:26:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:34.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46374} Jan 27 20:26:44.859: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:44.891: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46444} Jan 27 20:26:54.856: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:26:54.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46524} Jan 27 20:26:55.825: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:26:55.825: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:27:04.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:04.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c467ac} Jan 27 20:27:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af127c} Jan 27 20:27:24.858: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af148c} Jan 27 20:27:25.867: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:27:25.867: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:27:34.857: INFO: expecting there to be in [3, 4] replicas 
(are: 3) Jan 27 20:27:34.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5807c} Jan 27 20:27:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:44.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c46994} Jan 27 20:27:54.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:27:54.888: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af0094} Jan 27 20:27:55.907: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:27:55.907: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:28:04.860: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:04.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4622c} Jan 27 20:28:14.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:14.893: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003b5822c} Jan 27 20:28:24.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:24.889: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af02fc} Jan 27 20:28:25.947: INFO: RC rc: sending request to consume 325 millicores Jan 27 20:28:25.947: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 27 20:28:34.857: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:34.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003af056c} Jan 27 20:28:44.861: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:44.892: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c466ec} Jan 27 20:28:44.924: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 27 20:28:44.955: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-27 20:18:40 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003c4694c} Jan 27 20:28:44.956: INFO: Number of replicas was stable over 10m0s Jan 27 20:28:44.956: INFO: RC rc: consume 10 millicores in total Jan 27 20:28:44.956: INFO: RC rc: setting consumption to 10 millicores in total Jan 27 20:28:44.987: INFO: waiting for 1 replicas (current: 3) Jan 27 20:28:55.991: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:28:55.991: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:29:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:25.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:26.030: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:29:26.030: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:29:45.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:29:56.070: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:29:56.070: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:30:05.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:25.020: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:26.108: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:30:26.108: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:30:45.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:30:56.147: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:30:56.147: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:31:05.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:25.019: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:26.191: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:31:26.191: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:31:45.023: INFO: waiting for 1 replicas (current: 3) Jan 27 20:31:56.229: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:31:56.229: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:32:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:25.020: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:26.272: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:32:26.272: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:32:45.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:32:56.312: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:32:56.312: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:33:05.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:25.021: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:26.351: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:33:26.351: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:33:45.024: INFO: waiting for 1 replicas (current: 3) Jan 27 20:33:56.391: INFO: RC rc: sending request to consume 10 millicores Jan 27 20:33:56.392: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8430/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 27 20:34:05.020: INFO: waiting for 1 replicas (current: 2) Jan 27 20:34:25.019: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC rc 01/27/23 20:34:25.054 Jan 27 20:34:25.054: INFO: RC rc: stopping metric consumer Jan 27 20:34:25.054: INFO: RC rc: stopping mem consumer Jan 27 20:34:25.054: INFO: RC rc: stopping CPU consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-8430, will wait for the garbage collector to delete the pods 01/27/23 20:34:35.059 Jan 27 20:34:35.181: INFO: Deleting ReplicationController rc took: 38.42465ms Jan 27 20:34:35.282: INFO: Terminating ReplicationController rc pods took: 101.154636ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-8430, will wait for the garbage collector to delete the pods 01/27/23 20:34:36.642 Jan 27 20:34:36.759: INFO: Deleting ReplicationController rc-ctrl took: 35.211927ms Jan 27 20:34:36.860: INFO: Terminating ReplicationController rc-ctrl pods took: 101.053091ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 20:34:38.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-8430" for this suite. 01/27/23 20:34:38.65 << End Captured GinkgoWriter Output
------------------------------
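For reference, a minimal client-go sketch of the call the ConsumeCPU URLs above encode: a GET through the API server's service proxy to the rc-ctrl controller service. The namespace, service name, path, and query parameters are copied from the log; the kubeconfig path, the default service port, and the rest of the setup are illustrative assumptions, not the e2e framework's own implementation.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the conformance run uses /tmp/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/namespaces/<ns>/services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100
	resp := cs.CoreV1().Services("horizontal-pod-autoscaling-8430").ProxyGet(
		"", "rc-ctrl", "", "ConsumeCPU",
		map[string]string{
			"durationSec":           "30",
			"millicores":            "325",
			"requestSizeMillicores": "100",
		},
	)
	body, err := resp.DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}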
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:695
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:34:38.693 Jan 27 20:34:38.693: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset 01/27/23 20:34:38.694 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:34:38.796 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:34:38.857 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 STEP: Creating service test in namespace statefulset-5702 01/27/23 20:34:38.92 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:695 STEP: Creating stateful set ss in namespace statefulset-5702 01/27/23 20:34:38.959 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5702 01/27/23 20:34:38.994 Jan 27 20:34:39.029: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 20:34:49.061: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 01/27/23 20:34:49.061 Jan 27 20:34:49.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:34:49.885: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:34:49.885: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:34:49.885: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:34:49.917: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 27 20:34:59.955: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:34:59.955: INFO: Waiting for
statefulset status.replicas updated to 0 Jan 27 20:35:00.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999964s Jan 27 20:35:01.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962917242s Jan 27 20:35:02.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.930142514s Jan 27 20:35:03.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.895390563s Jan 27 20:35:04.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.862760389s Jan 27 20:35:05.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.829677962s Jan 27 20:35:06.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.796642828s Jan 27 20:35:07.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.764071599s Jan 27 20:35:08.367: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.727110976s Jan 27 20:35:09.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 693.57594ms �[1mSTEP:�[0m Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5702 �[38;5;243m01/27/23 20:35:10.401�[0m Jan 27 20:35:10.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:35:11.009: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 27 20:35:11.009: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:35:11.010: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:35:11.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:35:11.562: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 27 20:35:11.562: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:35:11.562: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:35:11.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:35:12.094: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 27 20:35:12.094: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:35:12.094: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:35:12.126: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:35:12.126: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:35:12.126: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP:�[0m Scale down will not halt with unhealthy stateful pod �[38;5;243m01/27/23 20:35:12.126�[0m Jan 27 20:35:12.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-0 -- 
/bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:35:12.717: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:35:12.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:35:12.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:35:12.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:35:13.300: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:35:13.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:35:13.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:35:13.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5702 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:35:13.868: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:35:13.868: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:35:13.868: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:35:13.868: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 20:35:13.903: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 27 20:35:23.971: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:35:23.971: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:35:23.971: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:35:24.123: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 20:35:24.123: INFO: ss-0 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC }] Jan 27 20:35:24.123: INFO: ss-1 capz-conf-7xz7d Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:24.123: INFO: ss-2 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:24.123: INFO: Jan 27 20:35:24.123: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 20:35:25.157: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 20:35:25.157: INFO: ss-0 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC }] Jan 27 20:35:25.157: INFO: ss-1 capz-conf-7xz7d Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:25.157: INFO: ss-2 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:25.157: INFO: Jan 27 20:35:25.157: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 20:35:26.190: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 20:35:26.190: INFO: ss-0 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC }] Jan 27 20:35:26.190: INFO: ss-1 capz-conf-7xz7d Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:26.190: INFO: ss-2 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:26.190: INFO: Jan 27 20:35:26.190: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 
20:35:27.224: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 20:35:27.224: INFO: ss-0 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC }] Jan 27 20:35:27.224: INFO: ss-1 capz-conf-7xz7d Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:27.224: INFO: ss-2 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:27.224: INFO: Jan 27 20:35:27.224: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 20:35:28.256: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 20:35:28.256: INFO: ss-0 capz-conf-d9r4r Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:34:39 +0000 UTC }] Jan 27 20:35:28.257: INFO: ss-1 capz-conf-7xz7d Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-27 20:35:00 +0000 UTC }] Jan 27 20:35:28.257: INFO: Jan 27 20:35:28.257: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 27 20:35:29.289: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.783023616s Jan 27 20:35:30.321: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.750520953s Jan 27 20:35:31.353: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.718633623s Jan 27 20:35:32.387: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.685527653s Jan 27 20:35:33.420: INFO: Verifying statefulset ss doesn't scale past 0 for another 652.779142ms �[1mSTEP:�[0m Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5702 �[38;5;243m01/27/23 20:35:34.421�[0m Jan 27 20:35:34.453: INFO: Scaling statefulset ss to 0 
Jan 27 20:35:34.549: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 27 20:35:34.580: INFO: Deleting all statefulset in ns statefulset-5702 Jan 27 20:35:34.612: INFO: Scaling statefulset ss to 0 Jan 27 20:35:34.707: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 20:35:34.739: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 Jan 27 20:35:34.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5702" for this suite. 01/27/23 20:35:34.873 {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","completed":25,"skipped":2201,"failed":0}
------------------------------
• [56.216 seconds] [sig-apps] StatefulSet test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:101 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:695
------------------------------
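The burst-scaling spec above toggles readiness by moving the webserver's index.html out of and back into the htdocs directory with kubectl exec, then polls each pod's Ready condition before allowing the scale operation to proceed. A hedged sketch of that polling step with client-go (namespace, pod name, and the kubeconfig path are taken from the log; the helper, intervals, and timeout are mine, not the upstream framework's):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until ss-0 is Running and Ready (or the timeout expires), mirroring the
	// "Waiting for pod ss-0 to enter Running - Ready=..." lines in the log.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("statefulset-5702").Get(context.TODO(), "ss-0", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("pod %s phase=%s ready=%t\n", pod.Name, pod.Status.Phase, podReady(pod))
		return pod.Status.Phase == corev1.PodRunning && podReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
}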
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:35:34.918 Jan 27 20:35:34.918: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename namespaces 01/27/23 20:35:34.919 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:35:35.025 STEP: Waiting
for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:35:35.087 [It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 STEP: Read namespace status 01/27/23 20:35:35.148 Jan 27 20:35:35.180: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} STEP: Patch namespace status 01/27/23 20:35:35.18 Jan 27 20:35:35.217: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} STEP: Update namespace status 01/27/23 20:35:35.217 Jan 27 20:35:35.285: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Jan 27 20:35:35.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1068" for this suite. 01/27/23 20:35:35.325 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]","completed":26,"skipped":2330,"failed":0}
------------------------------
• [0.442 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298
------------------------------
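The namespace-status spec above first merge-patches and then updates the /status subresource of its test namespace, attaching the StatusPatch and StatusUpdate conditions shown in the log. A minimal sketch of the patch half using client-go (condition payload mirrors the log; the trailing "status" argument selects the subresource; client setup and error handling are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Merge-patch the Namespace /status subresource with a custom condition.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatch","status":"True","reason":"E2E","message":"Patched by an e2e test"}]}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(
		context.TODO(), "namespaces-1068", types.MergePatchType, patch,
		metav1.PatchOptions{}, "status",
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched namespace %s, conditions: %+v\n", ns.Name, ns.Status.Conditions)
}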
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:35:35.365 Jan 27 20:35:35.365: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets 01/27/23 20:35:35.366 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:35:35.466 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:35:35.527 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431 Jan 27 20:35:35.776: INFO: Create a RollingUpdate DaemonSet Jan 27 20:35:35.813: INFO: Check that daemon pods launch on every node of the cluster Jan 27 20:35:35.855: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:35.886: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:35:35.886: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:35:36.924: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:36.957: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:35:36.957: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:35:37.921: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:37.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:35:37.955: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:35:38.921: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27
20:35:38.954: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:35:38.954: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:35:39.924: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:39.962: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:35:39.962: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:35:40.922: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:40.953: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 27 20:35:40.953: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set Jan 27 20:35:40.953: INFO: Update the DaemonSet to trigger a rollout Jan 27 20:35:41.025: INFO: Updating DaemonSet daemon-set Jan 27 20:35:47.172: INFO: Roll back the DaemonSet before rollout is complete Jan 27 20:35:47.242: INFO: Updating DaemonSet daemon-set Jan 27 20:35:47.242: INFO: Make sure DaemonSet rollback is complete Jan 27 20:35:47.308: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:48.374: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:49.375: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:50.375: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:35:51.341: INFO: Pod daemon-set-p4jj9 is not available Jan 27 20:35:51.375: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 �[1mSTEP:�[0m Deleting DaemonSet "daemon-set" �[38;5;243m01/27/23 20:35:51.443�[0m �[1mSTEP:�[0m deleting DaemonSet.extensions daemon-set in namespace daemonsets-6815, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:35:51.443�[0m Jan 27 20:35:51.567: INFO: Deleting DaemonSet.extensions daemon-set took: 35.4104ms Jan 27 20:35:51.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.588854ms Jan 27 20:36:00.400: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:00.400: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 27 20:36:00.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"19823"},"items":null} Jan 27 20:36:00.473: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"19823"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
test/e2e/framework/framework.go:187 Jan 27 20:36:00.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6815" for this suite. 01/27/23 20:36:00.602 {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","completed":27,"skipped":2389,"failed":0}
------------------------------
• [25.278 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431
------------------------------
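The rollback spec above updates the DaemonSet's pod template to start a rollout and then reverts it before the rollout completes, checking that pods which never changed are not restarted. A simplified sketch of that update-then-revert flow with client-go (namespace and DaemonSet name are taken from the log; the replacement image and the lack of retry-on-conflict handling are simplifying assumptions, not the test's actual code):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	dsClient := cs.AppsV1().DaemonSets("daemonsets-6815")

	// Trigger a RollingUpdate by changing the pod template image.
	ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	oldImage := ds.Spec.Template.Spec.Containers[0].Image
	ds.Spec.Template.Spec.Containers[0].Image = "registry.k8s.io/pause:3.9" // placeholder image
	if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back by restoring the previous image; the controller reconciles the
	// daemon pods back to the old template without touching unchanged pods.
	ds, err = dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = oldImage
	if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}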
------------------------------ [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:36:00.644 Jan 27 20:36:00.644: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets 01/27/23 20:36:00.645 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:36:00.742 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:36:00.805 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 STEP: Creating simple DaemonSet "daemon-set" 01/27/23 20:36:01.001 STEP: Check that daemon pods launch on every node of the cluster. 01/27/23 20:36:01.036 Jan 27 20:36:01.084: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:01.116: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:01.116: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:36:02.150: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:02.183: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:02.183: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:36:03.150: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:03.181: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:03.181: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:36:04.150: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:04.182: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:04.182: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:36:05.150: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:05.182: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:36:05.182: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:36:06.151: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints
[{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:06.183: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:36:06.183: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:36:07.150: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:36:07.182: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 27 20:36:07.182: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m listing all DeamonSets �[38;5;243m01/27/23 20:36:07.213�[0m �[1mSTEP:�[0m DeleteCollection of the DaemonSets �[38;5;243m01/27/23 20:36:07.248�[0m �[1mSTEP:�[0m Verify that ReplicaSets have been deleted �[38;5;243m01/27/23 20:36:07.283�[0m [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 Jan 27 20:36:07.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"19911"},"items":null} Jan 27 20:36:07.416: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"19911"},"items":[{"metadata":{"name":"daemon-set-b2n56","generateName":"daemon-set-","namespace":"daemonsets-5840","uid":"25312320-6b58-4715-8296-2b83fc565829","resourceVersion":"19908","creationTimestamp":"2023-01-27T20:36:01Z","deletionTimestamp":"2023-01-27T20:36:37Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"dbf7434223485d17b1624f36700eb15476d6c3353c58622710feb9837b5220dd","cni.projectcalico.org/podIP":"192.168.183.142/32","cni.projectcalico.org/podIPs":"192.168.183.142/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a3ab2653-7e0c-44e3-a787-4b56876841bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3ab2653-7e0c-44e3-a787-4b56876841bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"ty
pe\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.183.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bxqrg","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bxqrg","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-7xz7d","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-7xz7d"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:01Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:05Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:05Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:01Z"}],"hostIP":"10.1.0.4","podIP":"192.168.183.142","podIPs":[{"ip":"192.168.183.142"}],"startTime":"2023-01-27T20:36:01Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-27T20:36:04Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://b4f1f9feec34c2002a1d864139c77a1a432ebe1fa7c846c21003c91b961d7066","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-c2xkg","generateName":"daemon-set-","namespace":"daemonsets-5840","uid":"3d5409f1-5374-4ef1-9594-c93c1b7026a5","resourceVersion":"19909","creationTimestamp":"2023-01-27T20:36:01Z","delet
ionTimestamp":"2023-01-27T20:36:37Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"2e8af56423c24b6be2b382daf78fbf891936929df7f7acb2de39f3336d7cbf7c","cni.projectcalico.org/podIP":"192.168.238.86/32","cni.projectcalico.org/podIPs":"192.168.238.86/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a3ab2653-7e0c-44e3-a787-4b56876841bf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3ab2653-7e0c-44e3-a787-4b56876841bf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-27T20:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.238.86\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-57h8r","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-57h8r","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-d9r4r","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedul
ingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-d9r4r"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:01Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:06Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:06Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-27T20:36:01Z"}],"hostIP":"10.1.0.5","podIP":"192.168.238.86","podIPs":[{"ip":"192.168.238.86"}],"startTime":"2023-01-27T20:36:01Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-27T20:36:06Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://b0cbeeda295131cdf734eac51480b7909dd89c8c0893d4499368fd775ac56d24","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Jan 27 20:36:07.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "daemonsets-5840" for this suite. �[38;5;243m01/27/23 20:36:07.549�[0m {"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","completed":28,"skipped":2397,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [6.939 seconds]�[0m [sig-apps] Daemon set [Serial] �[38;5;243mtest/e2e/apps/framework.go:23�[0m should list and delete a collection of DaemonSets [Conformance] �[38;5;243mtest/e2e/apps/daemon_set.go:822�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:36:00.644�[0m Jan 27 20:36:00.644: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename daemonsets �[38;5;243m01/27/23 20:36:00.645�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:36:00.742�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:36:00.805�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 �[1mSTEP:�[0m Creating simple DaemonSet "daemon-set" �[38;5;243m01/27/23 20:36:01.001�[0m �[1mSTEP:�[0m Check that daemon pods launch on every node of the cluster. 
:01Z"}],"hostIP":"10.1.0.5","podIP":"192.168.238.86","podIPs":[{"ip":"192.168.238.86"}],"startTime":"2023-01-27T20:36:01Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-27T20:36:06Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://b0cbeeda295131cdf734eac51480b7909dd89c8c0893d4499368fd775ac56d24","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Jan 27 20:36:07.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "daemonsets-5840" for this suite. �[38;5;243m01/27/23 20:36:07.549�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould support cascading deletion of custom resources�[0m �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:905�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:36:07.587�[0m Jan 27 20:36:07.588: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/27/23 20:36:07.589�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:36:07.689�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:36:07.75�[0m [It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Jan 27 20:36:07.812: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 27 20:36:10.066: INFO: created owner resource "ownerbfng4" Jan 27 20:36:10.111: INFO: created dependent resource "dependent2n26g" Jan 27 20:36:10.182: INFO: created canary resource "canaryznwnb" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:36:20.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-6336" for this suite. 
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","completed":29,"skipped":2430,"failed":0} ------------------------------ • [12.908 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370 [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:36:20.5 Jan 27 20:36:20.500: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc 01/27/23 20:36:20.501 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:36:20.599 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:36:20.661 [It]
should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370 �[1mSTEP:�[0m create the rc �[38;5;243m01/27/23 20:36:20.76�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m01/27/23 20:36:25.83�[0m �[1mSTEP:�[0m wait for the rc to be deleted �[38;5;243m01/27/23 20:36:25.866�[0m �[1mSTEP:�[0m wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[38;5;243m01/27/23 20:36:30.899�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m01/27/23 20:37:00.941�[0m Jan 27 20:37:01.044: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready" Jan 27 20:37:01.076: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. Elapsed: 31.563666ms Jan 27 20:37:01.076: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true) Jan 27 20:37:01.076: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready" Jan 27 20:37:01.439: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 27 20:37:01.439: INFO: Deleting pod "simpletest.rc-2kp4j" in namespace "gc-6819" Jan 27 20:37:01.486: INFO: Deleting pod "simpletest.rc-4jpzt" in namespace "gc-6819" Jan 27 20:37:01.532: INFO: Deleting pod "simpletest.rc-584wd" in namespace "gc-6819" Jan 27 20:37:01.581: INFO: Deleting pod "simpletest.rc-58g7g" in namespace "gc-6819" Jan 27 20:37:01.631: INFO: Deleting pod "simpletest.rc-5t6dt" in namespace "gc-6819" Jan 27 20:37:01.673: INFO: Deleting pod "simpletest.rc-5trsx" in namespace "gc-6819" Jan 27 20:37:01.718: INFO: Deleting pod "simpletest.rc-6kjxr" in namespace "gc-6819" Jan 27 20:37:01.764: INFO: Deleting pod "simpletest.rc-6wvcl" in namespace "gc-6819" Jan 27 20:37:01.809: INFO: Deleting pod "simpletest.rc-7dmfx" in namespace "gc-6819" Jan 27 20:37:01.851: INFO: Deleting pod "simpletest.rc-7fqdh" in namespace "gc-6819" Jan 27 20:37:01.894: INFO: Deleting pod "simpletest.rc-7sfxz" in namespace "gc-6819" Jan 27 20:37:01.942: INFO: Deleting pod "simpletest.rc-7tgmr" in namespace "gc-6819" Jan 27 20:37:01.984: INFO: Deleting pod "simpletest.rc-7vbds" in namespace "gc-6819" Jan 27 20:37:02.025: INFO: Deleting pod "simpletest.rc-7xzg5" in namespace "gc-6819" Jan 27 20:37:02.083: INFO: Deleting pod "simpletest.rc-8z4qw" in namespace "gc-6819" Jan 27 20:37:02.133: INFO: Deleting pod "simpletest.rc-9qcwh" in namespace "gc-6819" Jan 27 20:37:02.183: INFO: Deleting pod "simpletest.rc-9s9tr" in namespace "gc-6819" Jan 27 20:37:02.227: INFO: Deleting pod "simpletest.rc-9w47x" 
in namespace "gc-6819" Jan 27 20:37:02.271: INFO: Deleting pod "simpletest.rc-9z95v" in namespace "gc-6819" Jan 27 20:37:02.316: INFO: Deleting pod "simpletest.rc-b78f7" in namespace "gc-6819" Jan 27 20:37:02.358: INFO: Deleting pod "simpletest.rc-bgbk8" in namespace "gc-6819" Jan 27 20:37:02.403: INFO: Deleting pod "simpletest.rc-cbvnr" in namespace "gc-6819" Jan 27 20:37:02.448: INFO: Deleting pod "simpletest.rc-clcjf" in namespace "gc-6819" Jan 27 20:37:02.491: INFO: Deleting pod "simpletest.rc-cmxxz" in namespace "gc-6819" Jan 27 20:37:02.534: INFO: Deleting pod "simpletest.rc-d8292" in namespace "gc-6819" Jan 27 20:37:02.581: INFO: Deleting pod "simpletest.rc-d9fgc" in namespace "gc-6819" Jan 27 20:37:02.624: INFO: Deleting pod "simpletest.rc-d9vqh" in namespace "gc-6819" Jan 27 20:37:02.677: INFO: Deleting pod "simpletest.rc-f6cjs" in namespace "gc-6819" Jan 27 20:37:02.717: INFO: Deleting pod "simpletest.rc-f72gd" in namespace "gc-6819" Jan 27 20:37:02.759: INFO: Deleting pod "simpletest.rc-ffkxh" in namespace "gc-6819" Jan 27 20:37:02.802: INFO: Deleting pod "simpletest.rc-frmxq" in namespace "gc-6819" Jan 27 20:37:02.844: INFO: Deleting pod "simpletest.rc-g2tv2" in namespace "gc-6819" Jan 27 20:37:02.898: INFO: Deleting pod "simpletest.rc-g8k55" in namespace "gc-6819" Jan 27 20:37:02.942: INFO: Deleting pod "simpletest.rc-gdm5w" in namespace "gc-6819" Jan 27 20:37:02.987: INFO: Deleting pod "simpletest.rc-ght6w" in namespace "gc-6819" Jan 27 20:37:03.035: INFO: Deleting pod "simpletest.rc-gmhb8" in namespace "gc-6819" Jan 27 20:37:03.079: INFO: Deleting pod "simpletest.rc-gq8bz" in namespace "gc-6819" Jan 27 20:37:03.124: INFO: Deleting pod "simpletest.rc-gqcg5" in namespace "gc-6819" Jan 27 20:37:03.170: INFO: Deleting pod "simpletest.rc-gz5c6" in namespace "gc-6819" Jan 27 20:37:03.219: INFO: Deleting pod "simpletest.rc-h2gqr" in namespace "gc-6819" Jan 27 20:37:03.261: INFO: Deleting pod "simpletest.rc-hczxm" in namespace "gc-6819" Jan 27 20:37:03.302: INFO: Deleting pod "simpletest.rc-hd2jj" in namespace "gc-6819" Jan 27 20:37:03.341: INFO: Deleting pod "simpletest.rc-hsgr8" in namespace "gc-6819" Jan 27 20:37:03.387: INFO: Deleting pod "simpletest.rc-htvxl" in namespace "gc-6819" Jan 27 20:37:03.427: INFO: Deleting pod "simpletest.rc-jqbg4" in namespace "gc-6819" Jan 27 20:37:03.480: INFO: Deleting pod "simpletest.rc-jqctg" in namespace "gc-6819" Jan 27 20:37:03.520: INFO: Deleting pod "simpletest.rc-js4wl" in namespace "gc-6819" Jan 27 20:37:03.560: INFO: Deleting pod "simpletest.rc-jwtj5" in namespace "gc-6819" Jan 27 20:37:03.603: INFO: Deleting pod "simpletest.rc-k66bl" in namespace "gc-6819" Jan 27 20:37:03.646: INFO: Deleting pod "simpletest.rc-kd89d" in namespace "gc-6819" Jan 27 20:37:03.690: INFO: Deleting pod "simpletest.rc-knd9c" in namespace "gc-6819" Jan 27 20:37:03.732: INFO: Deleting pod "simpletest.rc-krvcc" in namespace "gc-6819" Jan 27 20:37:03.776: INFO: Deleting pod "simpletest.rc-kt927" in namespace "gc-6819" Jan 27 20:37:03.817: INFO: Deleting pod "simpletest.rc-kxzmh" in namespace "gc-6819" Jan 27 20:37:03.863: INFO: Deleting pod "simpletest.rc-lchcd" in namespace "gc-6819" Jan 27 20:37:03.908: INFO: Deleting pod "simpletest.rc-llq2m" in namespace "gc-6819" Jan 27 20:37:03.952: INFO: Deleting pod "simpletest.rc-lmdxw" in namespace "gc-6819" Jan 27 20:37:03.994: INFO: Deleting pod "simpletest.rc-ltmrc" in namespace "gc-6819" Jan 27 20:37:04.034: INFO: Deleting pod "simpletest.rc-mfjn8" in namespace "gc-6819" Jan 27 20:37:04.083: INFO: Deleting pod 
"simpletest.rc-mxqs8" in namespace "gc-6819" Jan 27 20:37:04.124: INFO: Deleting pod "simpletest.rc-n2tbp" in namespace "gc-6819" Jan 27 20:37:04.165: INFO: Deleting pod "simpletest.rc-n74lh" in namespace "gc-6819" Jan 27 20:37:04.206: INFO: Deleting pod "simpletest.rc-n9nsf" in namespace "gc-6819" Jan 27 20:37:04.251: INFO: Deleting pod "simpletest.rc-nptw2" in namespace "gc-6819" Jan 27 20:37:04.296: INFO: Deleting pod "simpletest.rc-p9mhv" in namespace "gc-6819" Jan 27 20:37:04.341: INFO: Deleting pod "simpletest.rc-pqfcr" in namespace "gc-6819" Jan 27 20:37:04.389: INFO: Deleting pod "simpletest.rc-pqmsp" in namespace "gc-6819" Jan 27 20:37:04.432: INFO: Deleting pod "simpletest.rc-q68sl" in namespace "gc-6819" Jan 27 20:37:04.473: INFO: Deleting pod "simpletest.rc-q6xjv" in namespace "gc-6819" Jan 27 20:37:04.514: INFO: Deleting pod "simpletest.rc-qff57" in namespace "gc-6819" Jan 27 20:37:04.557: INFO: Deleting pod "simpletest.rc-qxfkj" in namespace "gc-6819" Jan 27 20:37:04.605: INFO: Deleting pod "simpletest.rc-r4j8z" in namespace "gc-6819" Jan 27 20:37:04.645: INFO: Deleting pod "simpletest.rc-r5vfv" in namespace "gc-6819" Jan 27 20:37:04.685: INFO: Deleting pod "simpletest.rc-rkbpj" in namespace "gc-6819" Jan 27 20:37:04.724: INFO: Deleting pod "simpletest.rc-rmzfq" in namespace "gc-6819" Jan 27 20:37:04.767: INFO: Deleting pod "simpletest.rc-rw2z6" in namespace "gc-6819" Jan 27 20:37:04.819: INFO: Deleting pod "simpletest.rc-rzdrw" in namespace "gc-6819" Jan 27 20:37:04.864: INFO: Deleting pod "simpletest.rc-s7k9r" in namespace "gc-6819" Jan 27 20:37:04.911: INFO: Deleting pod "simpletest.rc-skm44" in namespace "gc-6819" Jan 27 20:37:04.951: INFO: Deleting pod "simpletest.rc-sp4kw" in namespace "gc-6819" Jan 27 20:37:04.992: INFO: Deleting pod "simpletest.rc-t478t" in namespace "gc-6819" Jan 27 20:37:05.041: INFO: Deleting pod "simpletest.rc-tlqxg" in namespace "gc-6819" Jan 27 20:37:05.081: INFO: Deleting pod "simpletest.rc-w6qm7" in namespace "gc-6819" Jan 27 20:37:05.126: INFO: Deleting pod "simpletest.rc-wdtvs" in namespace "gc-6819" Jan 27 20:37:05.168: INFO: Deleting pod "simpletest.rc-wfzdl" in namespace "gc-6819" Jan 27 20:37:05.218: INFO: Deleting pod "simpletest.rc-wkjpm" in namespace "gc-6819" Jan 27 20:37:05.257: INFO: Deleting pod "simpletest.rc-wm97x" in namespace "gc-6819" Jan 27 20:37:05.309: INFO: Deleting pod "simpletest.rc-wv74h" in namespace "gc-6819" Jan 27 20:37:05.357: INFO: Deleting pod "simpletest.rc-x4fv6" in namespace "gc-6819" Jan 27 20:37:05.401: INFO: Deleting pod "simpletest.rc-xjxzs" in namespace "gc-6819" Jan 27 20:37:05.447: INFO: Deleting pod "simpletest.rc-xn899" in namespace "gc-6819" Jan 27 20:37:05.486: INFO: Deleting pod "simpletest.rc-xpcb7" in namespace "gc-6819" Jan 27 20:37:05.530: INFO: Deleting pod "simpletest.rc-xtk9m" in namespace "gc-6819" Jan 27 20:37:05.570: INFO: Deleting pod "simpletest.rc-zd444" in namespace "gc-6819" Jan 27 20:37:05.609: INFO: Deleting pod "simpletest.rc-zd6wk" in namespace "gc-6819" Jan 27 20:37:05.654: INFO: Deleting pod "simpletest.rc-zdc69" in namespace "gc-6819" Jan 27 20:37:05.696: INFO: Deleting pod "simpletest.rc-zddnl" in namespace "gc-6819" Jan 27 20:37:05.737: INFO: Deleting pod "simpletest.rc-zrvzx" in namespace "gc-6819" Jan 27 20:37:05.783: INFO: Deleting pod "simpletest.rc-zsd6t" in namespace "gc-6819" Jan 27 20:37:05.823: INFO: Deleting pod "simpletest.rc-zskgf" in namespace "gc-6819" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:37:05.870: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-6819" for this suite. �[38;5;243m01/27/23 20:37:05.904�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","completed":30,"skipped":2494,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [45.443 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should orphan pods created by rc if delete options say so [Conformance] �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:370�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:36:20.5�[0m Jan 27 20:36:20.500: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/27/23 20:36:20.501�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:36:20.599�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:36:20.661�[0m [It] should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370 �[1mSTEP:�[0m create the rc �[38;5;243m01/27/23 20:36:20.76�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m01/27/23 20:36:25.83�[0m �[1mSTEP:�[0m wait for the rc to be deleted �[38;5;243m01/27/23 20:36:25.866�[0m �[1mSTEP:�[0m wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[38;5;243m01/27/23 20:36:30.899�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m01/27/23 20:37:00.941�[0m Jan 27 20:37:01.044: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready" Jan 27 20:37:01.076: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. 
Elapsed: 31.563666ms Jan 27 20:37:01.076: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true) Jan 27 20:37:01.076: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready" Jan 27 20:37:01.439: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 27 20:37:01.439: INFO: Deleting pod "simpletest.rc-2kp4j" in namespace "gc-6819" Jan 27 20:37:01.486: INFO: Deleting pod "simpletest.rc-4jpzt" in namespace "gc-6819" Jan 27 20:37:01.532: INFO: Deleting pod "simpletest.rc-584wd" in namespace "gc-6819" Jan 27 20:37:01.581: INFO: Deleting pod "simpletest.rc-58g7g" in namespace "gc-6819" Jan 27 20:37:01.631: INFO: Deleting pod "simpletest.rc-5t6dt" in namespace "gc-6819" Jan 27 20:37:01.673: INFO: Deleting pod "simpletest.rc-5trsx" in namespace "gc-6819" Jan 27 20:37:01.718: INFO: Deleting pod "simpletest.rc-6kjxr" in namespace "gc-6819" Jan 27 20:37:01.764: INFO: Deleting pod "simpletest.rc-6wvcl" in namespace "gc-6819" Jan 27 20:37:01.809: INFO: Deleting pod "simpletest.rc-7dmfx" in namespace "gc-6819" Jan 27 20:37:01.851: INFO: Deleting pod "simpletest.rc-7fqdh" in namespace "gc-6819" Jan 27 20:37:01.894: INFO: Deleting pod "simpletest.rc-7sfxz" in namespace "gc-6819" Jan 27 20:37:01.942: INFO: Deleting pod "simpletest.rc-7tgmr" in namespace "gc-6819" Jan 27 20:37:01.984: INFO: Deleting pod "simpletest.rc-7vbds" in namespace "gc-6819" Jan 27 20:37:02.025: INFO: Deleting pod "simpletest.rc-7xzg5" in namespace "gc-6819" Jan 27 20:37:02.083: INFO: Deleting pod "simpletest.rc-8z4qw" in namespace "gc-6819" Jan 27 20:37:02.133: INFO: Deleting pod "simpletest.rc-9qcwh" in namespace "gc-6819" Jan 27 20:37:02.183: INFO: Deleting pod "simpletest.rc-9s9tr" in namespace "gc-6819" Jan 27 20:37:02.227: INFO: Deleting pod "simpletest.rc-9w47x" in namespace "gc-6819" Jan 27 20:37:02.271: INFO: Deleting pod "simpletest.rc-9z95v" in namespace "gc-6819" Jan 27 20:37:02.316: INFO: Deleting pod "simpletest.rc-b78f7" in namespace "gc-6819" Jan 27 20:37:02.358: INFO: Deleting pod "simpletest.rc-bgbk8" in namespace "gc-6819" Jan 27 20:37:02.403: INFO: Deleting pod "simpletest.rc-cbvnr" in namespace "gc-6819" Jan 27 20:37:02.448: INFO: Deleting pod "simpletest.rc-clcjf" in namespace "gc-6819" Jan 27 20:37:02.491: INFO: Deleting pod "simpletest.rc-cmxxz" in namespace "gc-6819" Jan 27 20:37:02.534: INFO: Deleting pod "simpletest.rc-d8292" in namespace "gc-6819" Jan 27 20:37:02.581: INFO: Deleting pod "simpletest.rc-d9fgc" in namespace "gc-6819" Jan 27 20:37:02.624: INFO: Deleting pod "simpletest.rc-d9vqh" in namespace "gc-6819" Jan 27 20:37:02.677: INFO: Deleting pod 
"simpletest.rc-f6cjs" in namespace "gc-6819" Jan 27 20:37:02.717: INFO: Deleting pod "simpletest.rc-f72gd" in namespace "gc-6819" Jan 27 20:37:02.759: INFO: Deleting pod "simpletest.rc-ffkxh" in namespace "gc-6819" Jan 27 20:37:02.802: INFO: Deleting pod "simpletest.rc-frmxq" in namespace "gc-6819" Jan 27 20:37:02.844: INFO: Deleting pod "simpletest.rc-g2tv2" in namespace "gc-6819" Jan 27 20:37:02.898: INFO: Deleting pod "simpletest.rc-g8k55" in namespace "gc-6819" Jan 27 20:37:02.942: INFO: Deleting pod "simpletest.rc-gdm5w" in namespace "gc-6819" Jan 27 20:37:02.987: INFO: Deleting pod "simpletest.rc-ght6w" in namespace "gc-6819" Jan 27 20:37:03.035: INFO: Deleting pod "simpletest.rc-gmhb8" in namespace "gc-6819" Jan 27 20:37:03.079: INFO: Deleting pod "simpletest.rc-gq8bz" in namespace "gc-6819" Jan 27 20:37:03.124: INFO: Deleting pod "simpletest.rc-gqcg5" in namespace "gc-6819" Jan 27 20:37:03.170: INFO: Deleting pod "simpletest.rc-gz5c6" in namespace "gc-6819" Jan 27 20:37:03.219: INFO: Deleting pod "simpletest.rc-h2gqr" in namespace "gc-6819" Jan 27 20:37:03.261: INFO: Deleting pod "simpletest.rc-hczxm" in namespace "gc-6819" Jan 27 20:37:03.302: INFO: Deleting pod "simpletest.rc-hd2jj" in namespace "gc-6819" Jan 27 20:37:03.341: INFO: Deleting pod "simpletest.rc-hsgr8" in namespace "gc-6819" Jan 27 20:37:03.387: INFO: Deleting pod "simpletest.rc-htvxl" in namespace "gc-6819" Jan 27 20:37:03.427: INFO: Deleting pod "simpletest.rc-jqbg4" in namespace "gc-6819" Jan 27 20:37:03.480: INFO: Deleting pod "simpletest.rc-jqctg" in namespace "gc-6819" Jan 27 20:37:03.520: INFO: Deleting pod "simpletest.rc-js4wl" in namespace "gc-6819" Jan 27 20:37:03.560: INFO: Deleting pod "simpletest.rc-jwtj5" in namespace "gc-6819" Jan 27 20:37:03.603: INFO: Deleting pod "simpletest.rc-k66bl" in namespace "gc-6819" Jan 27 20:37:03.646: INFO: Deleting pod "simpletest.rc-kd89d" in namespace "gc-6819" Jan 27 20:37:03.690: INFO: Deleting pod "simpletest.rc-knd9c" in namespace "gc-6819" Jan 27 20:37:03.732: INFO: Deleting pod "simpletest.rc-krvcc" in namespace "gc-6819" Jan 27 20:37:03.776: INFO: Deleting pod "simpletest.rc-kt927" in namespace "gc-6819" Jan 27 20:37:03.817: INFO: Deleting pod "simpletest.rc-kxzmh" in namespace "gc-6819" Jan 27 20:37:03.863: INFO: Deleting pod "simpletest.rc-lchcd" in namespace "gc-6819" Jan 27 20:37:03.908: INFO: Deleting pod "simpletest.rc-llq2m" in namespace "gc-6819" Jan 27 20:37:03.952: INFO: Deleting pod "simpletest.rc-lmdxw" in namespace "gc-6819" Jan 27 20:37:03.994: INFO: Deleting pod "simpletest.rc-ltmrc" in namespace "gc-6819" Jan 27 20:37:04.034: INFO: Deleting pod "simpletest.rc-mfjn8" in namespace "gc-6819" Jan 27 20:37:04.083: INFO: Deleting pod "simpletest.rc-mxqs8" in namespace "gc-6819" Jan 27 20:37:04.124: INFO: Deleting pod "simpletest.rc-n2tbp" in namespace "gc-6819" Jan 27 20:37:04.165: INFO: Deleting pod "simpletest.rc-n74lh" in namespace "gc-6819" Jan 27 20:37:04.206: INFO: Deleting pod "simpletest.rc-n9nsf" in namespace "gc-6819" Jan 27 20:37:04.251: INFO: Deleting pod "simpletest.rc-nptw2" in namespace "gc-6819" Jan 27 20:37:04.296: INFO: Deleting pod "simpletest.rc-p9mhv" in namespace "gc-6819" Jan 27 20:37:04.341: INFO: Deleting pod "simpletest.rc-pqfcr" in namespace "gc-6819" Jan 27 20:37:04.389: INFO: Deleting pod "simpletest.rc-pqmsp" in namespace "gc-6819" Jan 27 20:37:04.432: INFO: Deleting pod "simpletest.rc-q68sl" in namespace "gc-6819" Jan 27 20:37:04.473: INFO: Deleting pod "simpletest.rc-q6xjv" in namespace "gc-6819" Jan 27 20:37:04.514: 
INFO: Deleting pod "simpletest.rc-qff57" in namespace "gc-6819" Jan 27 20:37:04.557: INFO: Deleting pod "simpletest.rc-qxfkj" in namespace "gc-6819" Jan 27 20:37:04.605: INFO: Deleting pod "simpletest.rc-r4j8z" in namespace "gc-6819" Jan 27 20:37:04.645: INFO: Deleting pod "simpletest.rc-r5vfv" in namespace "gc-6819" Jan 27 20:37:04.685: INFO: Deleting pod "simpletest.rc-rkbpj" in namespace "gc-6819" Jan 27 20:37:04.724: INFO: Deleting pod "simpletest.rc-rmzfq" in namespace "gc-6819" Jan 27 20:37:04.767: INFO: Deleting pod "simpletest.rc-rw2z6" in namespace "gc-6819" Jan 27 20:37:04.819: INFO: Deleting pod "simpletest.rc-rzdrw" in namespace "gc-6819" Jan 27 20:37:04.864: INFO: Deleting pod "simpletest.rc-s7k9r" in namespace "gc-6819" Jan 27 20:37:04.911: INFO: Deleting pod "simpletest.rc-skm44" in namespace "gc-6819" Jan 27 20:37:04.951: INFO: Deleting pod "simpletest.rc-sp4kw" in namespace "gc-6819" Jan 27 20:37:04.992: INFO: Deleting pod "simpletest.rc-t478t" in namespace "gc-6819" Jan 27 20:37:05.041: INFO: Deleting pod "simpletest.rc-tlqxg" in namespace "gc-6819" Jan 27 20:37:05.081: INFO: Deleting pod "simpletest.rc-w6qm7" in namespace "gc-6819" Jan 27 20:37:05.126: INFO: Deleting pod "simpletest.rc-wdtvs" in namespace "gc-6819" Jan 27 20:37:05.168: INFO: Deleting pod "simpletest.rc-wfzdl" in namespace "gc-6819" Jan 27 20:37:05.218: INFO: Deleting pod "simpletest.rc-wkjpm" in namespace "gc-6819" Jan 27 20:37:05.257: INFO: Deleting pod "simpletest.rc-wm97x" in namespace "gc-6819" Jan 27 20:37:05.309: INFO: Deleting pod "simpletest.rc-wv74h" in namespace "gc-6819" Jan 27 20:37:05.357: INFO: Deleting pod "simpletest.rc-x4fv6" in namespace "gc-6819" Jan 27 20:37:05.401: INFO: Deleting pod "simpletest.rc-xjxzs" in namespace "gc-6819" Jan 27 20:37:05.447: INFO: Deleting pod "simpletest.rc-xn899" in namespace "gc-6819" Jan 27 20:37:05.486: INFO: Deleting pod "simpletest.rc-xpcb7" in namespace "gc-6819" Jan 27 20:37:05.530: INFO: Deleting pod "simpletest.rc-xtk9m" in namespace "gc-6819" Jan 27 20:37:05.570: INFO: Deleting pod "simpletest.rc-zd444" in namespace "gc-6819" Jan 27 20:37:05.609: INFO: Deleting pod "simpletest.rc-zd6wk" in namespace "gc-6819" Jan 27 20:37:05.654: INFO: Deleting pod "simpletest.rc-zdc69" in namespace "gc-6819" Jan 27 20:37:05.696: INFO: Deleting pod "simpletest.rc-zddnl" in namespace "gc-6819" Jan 27 20:37:05.737: INFO: Deleting pod "simpletest.rc-zrvzx" in namespace "gc-6819" Jan 27 20:37:05.783: INFO: Deleting pod "simpletest.rc-zsd6t" in namespace "gc-6819" Jan 27 20:37:05.823: INFO: Deleting pod "simpletest.rc-zskgf" in namespace "gc-6819" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:37:05.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-6819" for this suite. 
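For context, the long run of `Deleting pod "simpletest.rc-..." in namespace "gc-6819"` entries above is the garbage collector spec cleaning up the pods it created. As a rough illustration only (not the e2e framework's own helper), a minimal client-go program that performs this kind of per-pod cleanup might look like the sketch below; the namespace name is taken from the log, while the kubeconfig path and error handling are assumptions.

// Sketch: list every pod in a namespace and delete it, mirroring the
// "Deleting pod ... in namespace gc-6819" entries in the log above.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed: kubeconfig at the default location (the e2e run uses /tmp/kubeconfig).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns := "gc-6819" // namespace used by the garbage collector spec above
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("Deleting pod %q in namespace %q\n", p.Name, ns)
		if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), p.Name, metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}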
01/27/23 20:37:05.904 << End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/27/23 20:37:05.944 Jan 27 20:37:05.945: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets 01/27/23 20:37:05.946 STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:37:06.049 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:37:06.11 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 STEP: Creating a simple DaemonSet "daemon-set" 01/27/23 20:37:06.309 STEP: Check that daemon pods launch on every node of the cluster. 01/27/23 20:37:06.345 Jan 27 20:37:06.390: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:06.424: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:06.424: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:07.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:07.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:07.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:08.473: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:08.508: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:08.508: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:09.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:09.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:09.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:10.462: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:10.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:10.493: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:11.459:
INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:11.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:11.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:12.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:12.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:12.493: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:13.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:13.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:13.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:14.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:14.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:14.489: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:15.463: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:15.495: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:15.495: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:16.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:16.520: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:16.520: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:17.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:17.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:17.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:18.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:18.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:18.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:19.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:19.491: INFO: Number of nodes with available pods controlled by daemonset 
daemon-set: 0 Jan 27 20:37:19.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:20.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:20.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:20.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:21.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:21.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:21.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:22.463: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:22.495: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:22.495: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:23.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:23.498: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:23.498: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:24.460: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:24.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:24.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:25.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:25.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:25.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:26.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:26.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:26.489: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:27.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:27.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:27.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:28.460: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule 
TimeAdded:<nil>}], skip checking this node Jan 27 20:37:28.492: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:28.492: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:29.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:29.497: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:29.497: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:30.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:30.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:30.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:31.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:31.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:31.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:32.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:32.518: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:32.518: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:33.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:33.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:33.489: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:34.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:34.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:34.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:35.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:35.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:35.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:36.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:36.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:36.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:37.481: INFO: DaemonSet pods can't tolerate node 
capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:37.520: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:37.520: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:38.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:38.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:38.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:39.460: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:39.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:39.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:40.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:40.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:40.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:41.462: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:41.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:41.493: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:42.461: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:42.494: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:37:42.494: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:43.461: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:43.526: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:43.526: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:44.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:44.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:44.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:45.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:45.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:45.491: INFO: 
Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:46.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:46.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:46.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:47.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:47.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:47.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:48.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:48.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:48.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:49.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:49.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:49.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:50.462: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:50.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:50.493: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:51.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:51.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:51.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:52.457: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:52.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:52.489: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:53.462: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:53.494: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:53.494: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:54.459: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 
20:37:54.491: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:54.491: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:55.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:55.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:55.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:56.460: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:56.501: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:56.501: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:57.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:57.489: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:57.489: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:58.462: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:58.498: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:58.498: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:37:59.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:37:59.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:37:59.490: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:38:00.458: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:00.490: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 27 20:38:00.490: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
�[38;5;243m01/27/23 20:38:00.522�[0m Jan 27 20:38:00.631: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:00.667: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:38:00.667: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:38:01.702: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:01.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:38:01.734: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:38:02.700: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:02.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:38:02.734: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:38:03.702: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:03.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:38:03.734: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:38:04.702: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:04.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 27 20:38:04.734: INFO: Node capz-conf-d9r4r is running 0 daemon pod, expected 1 Jan 27 20:38:05.701: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:38:05.733: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 27 20:38:05.733: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m Wait for the failed daemon pod to be completely deleted. 
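The repeated "Number of nodes with available pods controlled by daemonset daemon-set" and "Node capz-conf-7xz7d is running 0 daemon pod, expected 1" lines above come from the framework polling until every schedulable node runs a ready daemon pod, skipping the control-plane node whose NoSchedule taint the pods do not tolerate. The following is only a hedged sketch of that check, not the framework's actual helper; the label selector is hypothetical, and the clientset is assumed to exist.

// Sketch: does every node the DaemonSet's pods tolerate run a ready daemon pod?
package daemoncheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func daemonPodsReadyOnAllNodes(ctx context.Context, cs kubernetes.Interface, ns, dsName string) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		// Hypothetical label; the real test matches the DaemonSet's own selector.
		LabelSelector: "daemonset-name=" + dsName,
	})
	if err != nil {
		return false, err
	}

	// Record which nodes already have a ready daemon pod.
	readyOnNode := map[string]bool{}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				readyOnNode[p.Spec.NodeName] = true
			}
		}
	}

	for _, n := range nodes.Items {
		skip := false
		for _, t := range n.Spec.Taints {
			// Daemon pods in this test don't tolerate the control-plane taint,
			// so that node is skipped, as the log lines above show.
			if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
				skip = true
			}
		}
		if skip {
			continue
		}
		if !readyOnNode[n.Name] {
			// e.g. "Node capz-conf-7xz7d is running 0 daemon pod, expected 1"
			return false, nil
		}
	}
	return true, nil
}

In the e2e run this kind of predicate is evaluated repeatedly (roughly once a second above) until it returns true or the test times out.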
01/27/23 20:38:05.733 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 01/27/23 20:38:05.797 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9049, will wait for the garbage collector to delete the pods 01/27/23 20:38:05.797 Jan 27 20:38:05.913: INFO: Deleting DaemonSet.extensions daemon-set took: 34.671701ms Jan 27 20:38:06.014: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.450465ms Jan 27 20:38:21.746: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 27 20:38:21.746: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 27 20:38:21.777: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22219"},"items":null} Jan 27 20:38:21.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22219"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Jan 27 20:38:21.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9049" for this suite. 01/27/23 20:38:21.938 {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","completed":31,"skipped":2510,"failed":0}
------------------------------
• [76.029 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293
[{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 20:38:05.733: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 27 20:38:05.733: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted. 01/27/23 20:38:05.733
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set" 01/27/23 20:38:05.797
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9049, will wait for the garbage collector to delete the pods 01/27/23 20:38:05.797
Jan 27 20:38:05.913: INFO: Deleting DaemonSet.extensions daemon-set took: 34.671701ms
Jan 27 20:38:06.014: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.450465ms
Jan 27 20:38:21.746: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 27 20:38:21.746: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 27 20:38:21.777: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22219"},"items":null}
Jan 27 20:38:21.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22219"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
Jan 27 20:38:21.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9049" for this suite. 01/27/23 20:38:21.938
<< End Captured GinkgoWriter Output
------------------------------
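
The repeated "DaemonSet pods can't tolerate node ... with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule}]" lines above are the framework skipping the tainted control-plane node when it counts daemon pods. For reference, a minimal sketch (not part of this run; name and image are placeholders) of the toleration a DaemonSet would need for its pods to land on that node as well:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon-set            # placeholder name
spec:
  selector:
    matchLabels:
      app: example-daemon-set
  template:
    metadata:
      labels:
        app: example-daemon-set
    spec:
      # Without this toleration the scheduler skips any node carrying the
      # node-role.kubernetes.io/control-plane:NoSchedule taint, which is why
      # the test's daemon pods only run on the two worker nodes above.
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # placeholder image

The conformance spec intentionally leaves the toleration off, so the "skip checking this node" lines are expected output in a passing run, not an error.
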
"kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. Elapsed: 31.950198ms Jan 27 20:39:00.574: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true) Jan 27 20:39:00.574: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready" Jan 27 20:39:00.928: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:39:00.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-4404" for this suite. �[38;5;243m01/27/23 20:39:00.962�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","completed":32,"skipped":2529,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [39.021 seconds]�[0m [sig-api-machinery] Garbage collector �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should delete jobs and pods created by cronjob �[38;5;243mtest/e2e/apimachinery/garbage_collector.go:1145�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:38:21.976�[0m Jan 27 20:38:21.976: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/27/23 20:38:21.978�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:38:22.074�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:38:22.136�[0m [It] should delete jobs and pods created by cronjob test/e2e/apimachinery/garbage_collector.go:1145 �[1mSTEP:�[0m Create the cronjob �[38;5;243m01/27/23 20:38:22.198�[0m �[1mSTEP:�[0m Wait for the CronJob to create new Job �[38;5;243m01/27/23 20:38:22.233�[0m �[1mSTEP:�[0m Delete the cronjob �[38;5;243m01/27/23 20:39:00.3�[0m �[1mSTEP:�[0m Verify if cronjob does not leave jobs nor pods behind �[38;5;243m01/27/23 20:39:00.335�[0m �[1mSTEP:�[0m Gathering metrics �[38;5;243m01/27/23 20:39:00.432�[0m Jan 27 20:39:00.542: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready" Jan 27 20:39:00.574: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. 
Elapsed: 31.950198ms Jan 27 20:39:00.574: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true) Jan 27 20:39:00.574: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready" Jan 27 20:39:00.928: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 Jan 27 20:39:00.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "gc-4404" for this suite. �[38;5;243m01/27/23 20:39:00.962�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal 
pod autoscaling (scale resource: CPU) �[38;5;243m[Serial] [Slow] Deployment�[0m �[1mShould scale from 1 pod to 3 pods and from 3 to 5�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:40�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:39:01.006�[0m Jan 27 20:39:01.006: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/27/23 20:39:01.007�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:39:01.104�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:39:01.165�[0m [It] Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:40 �[1mSTEP:�[0m Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m01/27/23 20:39:01.227�[0m �[1mSTEP:�[0m creating deployment test-deployment in namespace horizontal-pod-autoscaling-2181 �[38;5;243m01/27/23 20:39:01.274�[0m I0127 20:39:01.309966 13 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-2181, replica count: 1 I0127 20:39:11.361227 13 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 20:39:11.361�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-2181 �[38;5;243m01/27/23 20:39:11.404�[0m I0127 20:39:11.441863 13 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-2181, replica count: 1 I0127 20:39:21.493911 13 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 20:39:26.494: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Jan 27 20:39:26.526: INFO: RC test-deployment: consume 250 millicores in total Jan 27 20:39:26.526: INFO: RC test-deployment: consume 0 MB in total Jan 27 20:39:26.526: INFO: RC test-deployment: disabling mem consumption Jan 27 20:39:26.526: INFO: RC test-deployment: setting consumption to 250 millicores in total Jan 27 20:39:26.526: INFO: RC test-deployment: consume custom metric 0 in total Jan 27 20:39:26.526: INFO: RC test-deployment: disabling consumption of custom metric QPS Jan 27 20:39:26.593: INFO: waiting for 3 replicas (current: 1) Jan 27 20:39:46.625: INFO: waiting for 3 replicas (current: 1) Jan 27 20:39:56.526: INFO: RC test-deployment: sending request to consume 250 millicores Jan 27 20:39:56.526: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2181/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 20:40:06.626: INFO: waiting for 3 replicas (current: 1) Jan 27 20:40:26.590: INFO: RC test-deployment: sending request to consume 250 millicores Jan 27 20:40:26.590: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2181/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 27 20:40:26.625: INFO: waiting for 3 replicas (current: 1) Jan 27 20:40:46.626: INFO: waiting for 3 replicas (current: 3) Jan 27 20:40:46.626: INFO: RC test-deployment: consume 700 millicores in total Jan 27 20:40:46.627: INFO: RC test-deployment: setting consumption to 700 millicores in total Jan 27 20:40:46.658: INFO: waiting for 5 replicas (current: 3) Jan 27 20:40:56.631: INFO: RC test-deployment: sending request to consume 700 millicores Jan 27 20:40:56.631: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2181/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 27 20:41:06.691: INFO: waiting for 5 replicas (current: 5) �[1mSTEP:�[0m Removing consuming RC test-deployment �[38;5;243m01/27/23 20:41:06.727�[0m Jan 27 20:41:06.727: INFO: RC test-deployment: stopping metric consumer Jan 27 20:41:06.727: INFO: RC test-deployment: stopping CPU consumer Jan 27 20:41:06.727: INFO: RC test-deployment: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-2181, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:41:16.729�[0m Jan 27 20:41:16.848: INFO: Deleting Deployment.apps test-deployment took: 36.043449ms Jan 27 20:41:16.949: INFO: Terminating Deployment.apps test-deployment pods took: 100.666299ms �[1mSTEP:�[0m deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-2181, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:41:19.503�[0m Jan 27 20:41:19.621: INFO: Deleting ReplicationController test-deployment-ctrl took: 35.21578ms Jan 27 20:41:19.721: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.667686ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 27 20:41:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2181" for this suite. 
01/27/23 20:41:21.212
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5","completed":33,"skipped":2655,"failed":0}
------------------------------
• [SLOW TEST] [140.240 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment test/e2e/autoscaling/horizontal_pod_autoscaling.go:38 Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:339
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:41:21.256
Jan 27 20:41:21.257: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 20:41:21.258
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:41:21.361
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:41:21.422
[It] should scale down no more than given percentage
of current Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:339 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m01/27/23 20:41:21.483�[0m �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 8 replicas �[38;5;243m01/27/23 20:41:21.483�[0m �[1mSTEP:�[0m creating deployment consumer in namespace horizontal-pod-autoscaling-1494 �[38;5;243m01/27/23 20:41:21.529�[0m I0127 20:41:21.564663 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-1494, replica count: 8 I0127 20:41:31.616728 13 runners.go:193] consumer Pods: 8 out of 8 created, 8 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 20:41:31.616�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-1494 �[38;5;243m01/27/23 20:41:31.663�[0m I0127 20:41:31.698838 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-1494, replica count: 1 I0127 20:41:41.750234 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 20:41:46.751: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 27 20:41:46.782: INFO: RC consumer: consume 880 millicores in total Jan 27 20:41:46.783: INFO: RC consumer: setting consumption to 880 millicores in total Jan 27 20:41:46.783: INFO: RC consumer: sending request to consume 880 millicores Jan 27 20:41:46.783: INFO: RC consumer: consume 0 MB in total Jan 27 20:41:46.783: INFO: RC consumer: consume custom metric 0 in total Jan 27 20:41:46.783: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1494/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 } Jan 27 20:41:46.784: INFO: RC consumer: disabling consumption of custom metric QPS Jan 27 20:41:46.784: INFO: RC consumer: disabling mem consumption �[1mSTEP:�[0m triggering scale down by lowering consumption �[38;5;243m01/27/23 20:41:46.82�[0m Jan 27 20:41:46.820: INFO: RC consumer: consume 110 millicores in total Jan 27 20:41:49.845: INFO: RC consumer: setting consumption to 110 millicores in total Jan 27 20:41:49.877: INFO: waiting for 4 replicas (current: 8) Jan 27 20:42:09.912: INFO: waiting for 4 replicas (current: 8) Jan 27 20:42:19.848: INFO: RC consumer: sending request to consume 110 millicores Jan 27 20:42:19.848: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1494/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 20:42:29.914: INFO: waiting for 4 replicas (current: 7) Jan 27 20:42:49.890: INFO: RC consumer: sending request to consume 110 millicores Jan 27 20:42:49.890: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1494/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 20:42:49.911: INFO: waiting for 4 replicas (current: 4) Jan 27 20:42:49.943: INFO: waiting for 2 replicas (current: 4) Jan 27 20:43:09.980: INFO: waiting for 2 replicas (current: 4) Jan 27 20:43:19.932: INFO: RC consumer: sending 
request to consume 110 millicores Jan 27 20:43:19.932: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1494/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 20:43:29.977: INFO: waiting for 2 replicas (current: 3) Jan 27 20:43:49.973: INFO: RC consumer: sending request to consume 110 millicores Jan 27 20:43:49.973: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1494/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 27 20:43:49.979: INFO: waiting for 2 replicas (current: 2) �[1mSTEP:�[0m verifying time waited for a scale down to 4 replicas �[38;5;243m01/27/23 20:43:49.979�[0m �[1mSTEP:�[0m verifying time waited for a scale down to 2 replicas �[38;5;243m01/27/23 20:43:49.979�[0m �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m01/27/23 20:43:50.014�[0m Jan 27 20:43:50.014: INFO: RC consumer: stopping metric consumer Jan 27 20:43:50.014: INFO: RC consumer: stopping mem consumer Jan 27 20:43:50.014: INFO: RC consumer: stopping CPU consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-1494, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:44:00.018�[0m Jan 27 20:44:00.139: INFO: Deleting Deployment.apps consumer took: 36.358666ms Jan 27 20:44:00.240: INFO: Terminating Deployment.apps consumer pods took: 100.892648ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-1494, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:44:01.976�[0m Jan 27 20:44:02.098: INFO: Deleting ReplicationController consumer-ctrl took: 34.776654ms Jan 27 20:44:02.198: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.561378ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 Jan 27 20:44:03.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-1494" for this suite. 
01/27/23 20:44:03.694
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute","completed":34,"skipped":2813,"failed":0}
------------------------------
• [SLOW TEST] [162.473 seconds] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23 with scale limited by percentage test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:296 should scale down no more than given percentage of current Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:339
------------------------------
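
The spec above drives a fixed CPU load against the "consumer" Deployment and then checks that replicas drop no faster than a configured percentage per minute (8 -> 4 -> 2 in the log). A rough sketch of an autoscaling/v2 HorizontalPodAutoscaler expressing that kind of limit; the numbers are illustrative, and the e2e framework constructs its own HPA programmatically rather than applying a manifest:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-example               # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: consumer
  minReplicas: 1
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50         # illustrative target
  behavior:
    scaleDown:
      policies:
      - type: Percent                  # remove at most this share of current pods...
        value: 25                      # ...per period; illustrative value
        periodSeconds: 60
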
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:44:03.73
Jan 27 20:44:03.730: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/27/23 20:44:03.732
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:44:03.831
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:44:03.892
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 27 20:44:04.056: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 27 20:45:04.298: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:45:04.33
Jan 27 20:45:04.330: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path 01/27/23 20:45:04.332
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:45:04.431
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:45:04.493
[BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:496
STEP: Finding an available node 01/27/23 20:45:04.554
STEP: Trying to launch a pod without a label to get a node which can launch it. 01/27/23 20:45:04.554
Jan 27 20:45:04.593: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-9908" to be "running"
Jan 27 20:45:04.624: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 31.247172ms
Jan 27 20:45:06.659: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066092263s
Jan 27 20:45:08.657: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.064253896s
Jan 27 20:45:08.657: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/27/23 20:45:08.692
Jan 27 20:45:08.734: INFO: found a healthy node: capz-conf-7xz7d
[It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543
Jan 27 20:45:27.242: INFO: pods created so far: [1 1 1]
Jan 27 20:45:27.242: INFO: length of pods created so far: 3
Jan 27 20:45:31.312: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath test/e2e/framework/framework.go:187
Jan 27 20:45:38.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-9908" for this suite. 01/27/23 20:45:38.349
[AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:470
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Jan 27 20:45:38.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7122" for this suite. 01/27/23 20:45:38.595
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","completed":35,"skipped":2822,"failed":0}
------------------------------
• [95.096 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PreemptionExecutionPath test/e2e/scheduling/preemption.go:458 runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543
------------------------------
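
PreemptionExecutionPath above runs ReplicaSets at different pod priorities and verifies that higher-priority pods can displace lower-priority ones on the chosen node (capz-conf-7xz7d). As a point of reference, a minimal sketch of the objects behind that mechanism; the class name, value and image are illustrative, not the ones the suite generates:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-high-priority          # illustrative name
value: 1000000                         # larger values win when the scheduler must preempt
globalDefault: false
description: "Illustrative class; pods referencing it may evict lower-priority pods from a full node."
---
apiVersion: v1
kind: Pod
metadata:
  name: example-preemptor              # illustrative name
spec:
  priorityClassName: example-high-priority
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9   # placeholder image
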
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:45:38.827
Jan 27 20:45:38.827: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 01/27/23 20:45:38.829
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:45:38.926
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:45:38.987
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92
Jan 27 20:45:39.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 20:45:39.116: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 20:45:39.148: INFO: Logging pods the apiserver thinks is on node capz-conf-7xz7d before test
Jan 27 20:45:39.184: INFO: calico-node-windows-dqk58 from calico-system started at 2023-01-27 19:18:25 +0000 UTC (2 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container calico-node-felix ready: true, restart count 1
Jan 27 20:45:39.184: INFO: Container calico-node-startup ready: true, restart count 0
Jan 27 20:45:39.184: INFO: containerd-logger-p8hqb from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container containerd-logger ready: true, restart count 0
Jan 27 20:45:39.184: INFO: csi-azuredisk-node-win-vgkvl from kube-system started at 2023-01-27 19:18:55 +0000 UTC (3 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container azuredisk ready: true, restart count 0
Jan 27 20:45:39.184: INFO: Container liveness-probe ready: true, restart count 0
Jan 27 20:45:39.184: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 27 20:45:39.184: INFO: csi-proxy-bkbqk from kube-system started at 2023-01-27 19:18:55 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container csi-proxy ready: true, restart count 0
Jan 27 20:45:39.184: INFO: kube-proxy-windows-t6bzr from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container kube-proxy ready: true, restart count 0
Jan 27 20:45:39.184: INFO: pod4 from sched-preemption-path-9908 started at 2023-01-27 20:45:31 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container pod4 ready: true, restart count 0
Jan 27 20:45:39.184: INFO: rs-pod3-szjv6 from sched-preemption-path-9908 started at 2023-01-27 20:45:21 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.184: INFO: Container pod3 ready: true, restart count 0
Jan 27 20:45:39.184: INFO: Logging pods the apiserver thinks is on node capz-conf-d9r4r before test
Jan 27 20:45:39.219: INFO: calico-node-windows-v8qkl from calico-system started at 2023-01-27 19:18:17 +0000 UTC (2 container statuses recorded)
Jan 27 20:45:39.219: INFO: Container calico-node-felix ready: true, restart count 0
Jan 27 20:45:39.219: INFO: Container calico-node-startup ready: true, restart count 0
Jan 27 20:45:39.219: INFO: containerd-logger-44hf4 from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.219: INFO: Container containerd-logger ready: true, restart count 0
Jan 27 20:45:39.219: INFO: csi-azuredisk-node-win-7gwtl from kube-system started at 2023-01-27 19:18:47 +0000 UTC (3 container statuses recorded)
Jan 27 20:45:39.219: INFO: Container azuredisk ready: true, restart count 0
Jan 27 20:45:39.219: INFO: Container liveness-probe ready: true, restart count 0
Jan 27 20:45:39.219: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 27 20:45:39.219: INFO: csi-proxy-4r9lq from kube-system started at 2023-01-27 19:18:47 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.220: INFO: Container csi-proxy ready: true, restart count 0
Jan 27 20:45:39.220: INFO: kube-proxy-windows-685wt from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded)
Jan 27 20:45:39.220: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438
STEP: Trying to schedule Pod with nonempty NodeSelector.
�[38;5;243m01/27/23 20:45:39.22�[0m �[1mSTEP:�[0m Considering event: Type = [Warning], Name = [restricted-pod.173e45559efe1072], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] �[38;5;243m01/27/23 20:45:45.529�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Jan 27 20:45:46.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-pred-1030" for this suite. �[38;5;243m01/27/23 20:45:46.55�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","completed":36,"skipped":2822,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [7.757 seconds]�[0m [sig-scheduling] SchedulerPredicates [Serial] �[38;5;243mtest/e2e/scheduling/framework.go:40�[0m validates that NodeSelector is respected if not matching [Conformance] �[38;5;243mtest/e2e/scheduling/predicates.go:438�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 20:45:38.827�[0m Jan 27 20:45:38.827: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-pred �[38;5;243m01/27/23 20:45:38.829�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 20:45:38.926�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 20:45:38.987�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 27 20:45:39.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 27 20:45:39.116: INFO: Waiting for terminating namespaces to be deleted... 
Jan 27 20:45:39.148: INFO: Logging pods the apiserver thinks is on node capz-conf-7xz7d before test Jan 27 20:45:39.184: INFO: calico-node-windows-dqk58 from calico-system started at 2023-01-27 19:18:25 +0000 UTC (2 container statuses recorded) Jan 27 20:45:39.184: INFO: Container calico-node-felix ready: true, restart count 1 Jan 27 20:45:39.184: INFO: Container calico-node-startup ready: true, restart count 0 Jan 27 20:45:39.184: INFO: containerd-logger-p8hqb from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.184: INFO: Container containerd-logger ready: true, restart count 0 Jan 27 20:45:39.184: INFO: csi-azuredisk-node-win-vgkvl from kube-system started at 2023-01-27 19:18:55 +0000 UTC (3 container statuses recorded) Jan 27 20:45:39.184: INFO: Container azuredisk ready: true, restart count 0 Jan 27 20:45:39.184: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 20:45:39.184: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 20:45:39.184: INFO: csi-proxy-bkbqk from kube-system started at 2023-01-27 19:18:55 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.184: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 20:45:39.184: INFO: kube-proxy-windows-t6bzr from kube-system started at 2023-01-27 19:18:25 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.184: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 20:45:39.184: INFO: pod4 from sched-preemption-path-9908 started at 2023-01-27 20:45:31 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.184: INFO: Container pod4 ready: true, restart count 0 Jan 27 20:45:39.184: INFO: rs-pod3-szjv6 from sched-preemption-path-9908 started at 2023-01-27 20:45:21 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.184: INFO: Container pod3 ready: true, restart count 0 Jan 27 20:45:39.184: INFO: Logging pods the apiserver thinks is on node capz-conf-d9r4r before test Jan 27 20:45:39.219: INFO: calico-node-windows-v8qkl from calico-system started at 2023-01-27 19:18:17 +0000 UTC (2 container statuses recorded) Jan 27 20:45:39.219: INFO: Container calico-node-felix ready: true, restart count 0 Jan 27 20:45:39.219: INFO: Container calico-node-startup ready: true, restart count 0 Jan 27 20:45:39.219: INFO: containerd-logger-44hf4 from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.219: INFO: Container containerd-logger ready: true, restart count 0 Jan 27 20:45:39.219: INFO: csi-azuredisk-node-win-7gwtl from kube-system started at 2023-01-27 19:18:47 +0000 UTC (3 container statuses recorded) Jan 27 20:45:39.219: INFO: Container azuredisk ready: true, restart count 0 Jan 27 20:45:39.219: INFO: Container liveness-probe ready: true, restart count 0 Jan 27 20:45:39.219: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 27 20:45:39.219: INFO: csi-proxy-4r9lq from kube-system started at 2023-01-27 19:18:47 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.220: INFO: Container csi-proxy ready: true, restart count 0 Jan 27 20:45:39.220: INFO: kube-proxy-windows-685wt from kube-system started at 2023-01-27 19:18:17 +0000 UTC (1 container statuses recorded) Jan 27 20:45:39.220: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438 �[1mSTEP:�[0m Trying to schedule Pod with nonempty NodeSelector. 
------------------------------
[sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
[BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:45:46.596
Jan 27 20:45:46.597: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename controllerrevisions 01/27/23 20:45:46.598
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:45:46.695
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:45:46.755
[BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93
[It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
STEP: Creating DaemonSet "e2e-hknnx-daemon-set" 01/27/23 20:45:46.949
STEP: Check that daemon pods launch on every node of the cluster. 01/27/23 20:45:46.986
Jan 27 20:45:47.025: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 20:45:47.060: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 0
Jan 27 20:45:47.060: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1
Jan 27 20:45:48.094: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 20:45:48.126: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 0
Jan 27 20:45:48.126: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1
Jan 27 20:45:49.094: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 20:45:49.127: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 0
Jan 27 20:45:49.127: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1
Jan 27 20:45:50.098: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 27 20:45:50.159: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 0
Jan 27 20:45:50.159: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1
Jan 27 20:45:51.093: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane
Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:45:51.125: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 1 Jan 27 20:45:51.125: INFO: Node capz-conf-7xz7d is running 0 daemon pod, expected 1 Jan 27 20:45:52.097: INFO: DaemonSet pods can't tolerate node capz-conf-sz5101-control-plane-s42fq with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 27 20:45:52.128: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 2 Jan 27 20:45:52.128: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-hknnx-daemon-set �[1mSTEP:�[0m Confirm DaemonSet "e2e-hknnx-daemon-set" successfully created with "daemonset-name=e2e-hknnx-daemon-set" label �[38;5;243m01/27/23 20:45:52.16�[0m �[1mSTEP:�[0m Listing all ControllerRevisions with label "daemonset-name=e2e-hknnx-daemon-set" �[38;5;243m01/27/23 20:45:52.223�[0m Jan 27 20:45:52.255: INFO: Located ControllerRevision: "e2e-hknnx-daemon-set-5f5bdd78b4" �[1mSTEP:�[0m Patching ControllerRevision "e2e-hknnx-daemon-set-5f5bdd78b4" �[38;5;243m01/27/23 20:45:52.286�[0m Jan 27 20:45:52.323: INFO: e2e-hknnx-daemon-set-5f5bdd78b4 has been patched �[1mSTEP:�[0m Create a new ControllerRevision �[38;5;243m01/27/23 20:45:52.323�[0m Jan 27 20:45:52.364: INFO: Created ControllerRevision: e2e-hknnx-daemon-set-d46586694 �[1mSTEP:�[0m Confirm that there are two ControllerRevisions �[38;5;243m01/27/23 20:45:52.364�[0m Jan 27 20:45:52.364: INFO: Requesting list of ControllerRevisions to confirm quantity Jan 27 20:45:52.395: INFO: Found 2 ControllerRevisions �[1mSTEP:�[0m Deleting ControllerRevision "e2e-hknnx-daemon-set-5f5bdd78b4" �[38;5;243m01/27/23 20:45:52.395�[0m �[1mSTEP:�[0m Confirm that there is only one ControllerRevision �[38;5;243m01/27/23 20:45:52.434�[0m Jan 27 20:45:52.435: INFO: Requesting list of ControllerRevisions to confirm quantity Jan 27 20:45:52.471: INFO: Found 1 ControllerRevisions �[1mSTEP:�[0m Updating ControllerRevision "e2e-hknnx-daemon-set-d46586694" �[38;5;243m01/27/23 20:45:52.502�[0m Jan 27 20:45:52.569: INFO: e2e-hknnx-daemon-set-d46586694 has been updated �[1mSTEP:�[0m Generate another ControllerRevision by patching the Daemonset �[38;5;243m01/27/23 20:45:52.569�[0m W0127 20:45:52.609594 13 warnings.go:70] unknown field "updateStrategy" �[1mSTEP:�[0m Confirm that there are two ControllerRevisions �[38;5;243m01/27/23 20:45:52.609�[0m Jan 27 20:45:52.609: INFO: Requesting list of ControllerRevisions to confirm quantity Jan 27 20:45:52.644: INFO: Found 2 ControllerRevisions �[1mSTEP:�[0m Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-hknnx-daemon-set-d46586694=updated" �[38;5;243m01/27/23 20:45:52.644�[0m �[1mSTEP:�[0m Confirm that there is only one ControllerRevision �[38;5;243m01/27/23 20:45:52.685�[0m Jan 27 20:45:52.685: INFO: Requesting list of ControllerRevisions to confirm quantity Jan 27 20:45:52.716: INFO: Found 1 ControllerRevisions Jan 27 20:45:52.748: INFO: ControllerRevision "e2e-hknnx-daemon-set-6cf5f88d87" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:58 �[1mSTEP:�[0m Deleting DaemonSet "e2e-hknnx-daemon-set" �[38;5;243m01/27/23 20:45:52.779�[0m �[1mSTEP:�[0m deleting DaemonSet.extensions e2e-hknnx-daemon-set in namespace controllerrevisions-8762, will wait for the garbage collector to delete the pods �[38;5;243m01/27/23 20:45:52.779�[0m Jan 27 
20:45:52.899: INFO: Deleting DaemonSet.extensions e2e-hknnx-daemon-set took: 37.790176ms
Jan 27 20:45:52.999: INFO: Terminating DaemonSet.extensions e2e-hknnx-daemon-set pods took: 100.337203ms
Jan 27 20:45:57.831: INFO: Number of nodes with available pods controlled by daemonset e2e-hknnx-daemon-set: 0
Jan 27 20:45:57.831: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-hknnx-daemon-set
Jan 27 20:45:57.863: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"24558"},"items":null}
Jan 27 20:45:57.896: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24559"},"items":null}
[AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:187
Jan 27 20:45:57.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "controllerrevisions-8762" for this suite. 01/27/23 20:45:58.028
{"msg":"PASSED [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]","completed":37,"skipped":3033,"failed":0}
------------------------------
• [11.467 seconds] [sig-apps] ControllerRevision [Serial] test/e2e/apps/framework.go:23
should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
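Note: the ControllerRevision lifecycle exercised above can be approximated with plain kubectl against the objects named in this run (assuming the DaemonSet still exists; the label selectors are the ones logged above):
  kubectl -n controllerrevisions-8762 rollout status daemonset e2e-hknnx-daemon-set
  kubectl -n controllerrevisions-8762 get controllerrevisions -l daemonset-name=e2e-hknnx-daemon-set
  # DeleteCollection via label selector, as in the test's final step
  kubectl -n controllerrevisions-8762 delete controllerrevisions -l e2e-hknnx-daemon-set-d46586694=updated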
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:585
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:45:58.071
Jan 27 20:45:58.071: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset 01/27/23 20:45:58.072
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:45:58.169
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:45:58.232
[BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-2736 01/27/23 20:45:58.294
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:585
STEP: Initializing watcher for selector baz=blah,foo=bar 01/27/23 20:45:58.329
STEP: Creating stateful set ss in namespace statefulset-2736 01/27/23 20:45:58.362
STEP: Waiting until all
stateful set ss replicas will be running in namespace statefulset-2736 �[38;5;243m01/27/23 20:45:58.397�[0m Jan 27 20:45:58.429: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 20:46:08.461: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP:�[0m Confirming that stateful set scale up will halt with unhealthy stateful pod �[38;5;243m01/27/23 20:46:08.461�[0m Jan 27 20:46:08.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:46:09.108: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:46:09.109: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:46:09.109: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:46:09.141: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 27 20:46:19.176: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:46:19.176: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 20:46:19.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999622s Jan 27 20:46:20.337: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.968750759s Jan 27 20:46:21.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.936184289s Jan 27 20:46:22.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.903480274s Jan 27 20:46:23.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.871075059s Jan 27 20:46:24.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.839196331s Jan 27 20:46:25.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.805887681s Jan 27 20:46:26.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.769880624s Jan 27 20:46:27.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.73726452s Jan 27 20:46:28.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 704.750995ms �[1mSTEP:�[0m Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2736 �[38;5;243m01/27/23 20:46:29.601�[0m Jan 27 20:46:29.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:46:30.145: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 27 20:46:30.145: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:46:30.145: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:46:30.177: INFO: Found 1 stateful pods, waiting for 3 Jan 27 20:46:40.211: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:46:40.211: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:46:40.211: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 20:46:50.210: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:46:50.210: INFO: Waiting 
for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 20:46:50.211: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP:�[0m Verifying that stateful set ss was scaled up in order �[38;5;243m01/27/23 20:46:50.211�[0m �[1mSTEP:�[0m Scale down will halt with unhealthy stateful pod �[38;5;243m01/27/23 20:46:50.211�[0m Jan 27 20:46:50.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:46:50.969: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:46:50.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:46:50.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:46:50.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:46:51.564: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:46:51.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:46:51.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:46:51.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 20:46:52.115: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 27 20:46:52.115: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 20:46:52.115: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 20:46:52.115: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 20:46:52.147: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 27 20:47:02.212: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:47:02.212: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:47:02.212: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 27 20:47:02.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999601s Jan 27 20:47:03.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967824414s Jan 27 20:47:04.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.934769898s Jan 27 20:47:05.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.889982918s Jan 27 20:47:06.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.856009134s Jan 27 20:47:07.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.820624462s Jan 27 20:47:08.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.787129011s Jan 27 20:47:09.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.753641341s Jan 27 20:47:10.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.719758505s Jan 27 20:47:11.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 686.539694ms 
�[1mSTEP:�[0m Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2736 �[38;5;243m01/27/23 20:47:12.626�[0m Jan 27 20:47:12.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:47:13.243: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 27 20:47:13.243: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:47:13.243: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:47:13.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:47:13.833: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 27 20:47:13.833: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:47:13.833: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:47:13.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2736 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 20:47:14.355: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 27 20:47:14.355: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 20:47:14.355: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 20:47:14.355: INFO: Scaling statefulset ss to 0 �[1mSTEP:�[0m Verifying that stateful set ss was scaled down in reverse order �[38;5;243m01/27/23 20:47:34.484�[0m [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 27 20:47:34.485: INFO: Deleting all statefulset in ns statefulset-2736 Jan 27 20:47:34.516: INFO: Scaling statefulset ss to 0 Jan 27 20:47:34.611: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 20:47:34.643: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 Jan 27 20:47:34.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "statefulset-2736" for this suite. 
01/27/23 20:47:34.774
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","completed":38,"skipped":3161,"failed":0}
------------------------------
• [96.740 seconds] [sig-apps] StatefulSet test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:101
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:585
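Note: the scale-blocking behavior above hinges on the pod readiness probe: moving index.html out of the Apache web root makes ss-0 report Ready=false, and the StatefulSet controller does not create the next ordinal until it is Ready again. A rough manual sketch using the same namespace and exec command as the run (the scale and watch commands are illustrative):
  kubectl -n statefulset-2736 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
  kubectl -n statefulset-2736 scale statefulset ss --replicas=3    # stays at 1 replica while ss-0 is not Ready
  kubectl -n statefulset-2736 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'
  kubectl -n statefulset-2736 get pods -l baz=blah,foo=bar -w      # ss-1 and ss-2 then come up in ordinal order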
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance] test/e2e/node/taints.go:420
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:47:34.83
Jan 27 20:47:34.830: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename taint-multiple-pods 01/27/23 20:47:34.831
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:47:34.93
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:47:34.991
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/node/taints.go:348
Jan 27 20:47:35.052: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 27 20:48:35.258: INFO: Waiting for terminating namespaces to be deleted...
[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] test/e2e/node/taints.go:420
Jan 27 20:48:35.290: INFO: Starting informer...
STEP: Starting pods... 01/27/23 20:48:35.29
Jan 27 20:48:35.392: INFO: Pod1 is running on capz-conf-7xz7d. Tainting Node
Jan 27 20:48:35.458: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-7924" to be "running"
Jan 27 20:48:35.489: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.329668ms
Jan 27 20:48:37.523: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064653357s
Jan 27 20:48:39.522: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.064114721s
Jan 27 20:48:39.522: INFO: Pod "taint-eviction-b1" satisfied condition "running"
Jan 27 20:48:39.522: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-7924" to be "running"
Jan 27 20:48:39.554: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 31.994512ms
Jan 27 20:48:39.554: INFO: Pod "taint-eviction-b2" satisfied condition "running"
Jan 27 20:48:39.554: INFO: Pod2 is running on capz-conf-7xz7d. Tainting Node
STEP: Trying to apply a taint on the Node 01/27/23 20:48:39.554
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/27/23 20:48:39.63
STEP: Waiting for Pod1 and Pod2 to be deleted 01/27/23 20:48:39.681
Jan 27 20:48:46.429: INFO: Noticed Pod "taint-eviction-b1" gets evicted.
Jan 27 20:49:06.615: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/27/23 20:49:06.687
[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/framework.go:187
Jan 27 20:49:06.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-7924" for this suite. 01/27/23 20:49:06.766
{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","completed":39,"skipped":3442,"failed":0}
------------------------------
• [91.974 seconds] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/node/framework.go:23
evicts pods with minTolerationSeconds [Disruptive] [Conformance] test/e2e/node/taints.go:420
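Note: the eviction above is driven by an ordinary NoExecute taint; pods without a matching toleration are deleted immediately, and pods with tolerationSeconds are deleted once that window expires. Applying and removing the same taint by hand would look roughly like this (node name and taint key taken from this run):
  kubectl taint nodes capz-conf-7xz7d kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
  kubectl -n taint-multiple-pods-7924 get pods -w    # taint-eviction-b1/b2 disappear after their tolerationSeconds
  kubectl taint nodes capz-conf-7xz7d kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute-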
------------------------------
[sig-api-machinery] Garbage collector
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
test/e2e/apimachinery/garbage_collector.go:550
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:49:06.819
Jan 27 20:49:06.819: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/27/23 20:49:06.822
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:49:06.921
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:49:06.983
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/apimachinery/garbage_collector.go:550
STEP: create the deployment 01/27/23 20:49:07.045
STEP: Wait for the Deployment to create new ReplicaSet 01/27/23 20:49:07.081
STEP: delete the deployment 01/27/23 20:49:07.29
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 01/27/23 20:49:07.329
STEP: Gathering metrics 01/27/23 20:49:08.029
Jan 27 20:49:08.135: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready"
Jan 27 20:49:08.168: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. Elapsed: 32.477761ms
Jan 27 20:49:08.168: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true)
Jan 27 20:49:08.168: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready"
Jan 27 20:49:08.526: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
Jan 27 20:49:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2274" for this suite. 01/27/23 20:49:08.56
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","completed":40,"skipped":3716,"failed":0}
------------------------------
• [1.776 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/apimachinery/garbage_collector.go:550
------------------------------
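This spec checks that cascading deletion is suppressed when the delete request asks for it: with PropagationPolicy=Orphan, the garbage collector strips the ReplicaSet's ownerReference instead of deleting it, so the RS survives its Deployment. A sketch of that delete call, assuming the client and imports from the taint sketch above; the namespace matches this run, the deployment name is a placeholder:

```go
// Delete the Deployment but orphan its dependents (the ReplicaSet).
func deleteDeploymentOrphaningDependents(client kubernetes.Interface) error {
	orphan := metav1.DeletePropagationOrphan
	return client.AppsV1().Deployments("gc-2274").Delete(
		context.TODO(),
		"example-deployment", // illustrative name, not taken from this run
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
}
```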
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
test/e2e/apimachinery/garbage_collector.go:650
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:49:08.599
Jan 27 20:49:08.599: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/27/23 20:49:08.6
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:49:08.701
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:49:08.761
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650
STEP: create the rc 01/27/23 20:49:08.856
STEP: delete the rc 01/27/23 20:49:13.922
STEP: wait for the rc to be deleted 01/27/23 20:49:13.957
Jan 27 20:49:15.040: INFO: 80 pods remaining
Jan 27 20:49:15.040: INFO: 80 pods has nil DeletionTimestamp
Jan 27 20:49:16.041: INFO: 68 pods remaining
Jan 27 20:49:16.042: INFO: 68 pods has nil DeletionTimestamp
Jan 27 20:49:17.033: INFO: 60 pods remaining
Jan 27 20:49:17.033: INFO: 60 pods has nil DeletionTimestamp
Jan 27 20:49:18.029: INFO: 40 pods remaining
Jan 27 20:49:18.029: INFO: 40 pods has nil DeletionTimestamp
Jan 27 20:49:19.035: INFO: 29 pods remaining
Jan 27 20:49:19.035: INFO: 28 pods has nil DeletionTimestamp
Jan 27 20:49:20.025: INFO: 20 pods remaining
Jan 27 20:49:20.025: INFO: 20 pods has nil DeletionTimestamp
STEP: Gathering metrics 01/27/23 20:49:21.022
Jan 27 20:49:21.129: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" in namespace "kube-system" to be "running and ready"
Jan 27 20:49:21.161: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq": Phase="Running", Reason="", readiness=true. Elapsed: 32.230866ms
Jan 27 20:49:21.161: INFO: The phase of Pod kube-controller-manager-capz-conf-sz5101-control-plane-s42fq is Running (Ready = true)
Jan 27 20:49:21.161: INFO: Pod "kube-controller-manager-capz-conf-sz5101-control-plane-s42fq" satisfied condition "running and ready"
Jan 27 20:49:21.549: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
Jan 27 20:49:21.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4584" for this suite. 01/27/23 20:49:21.585
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","completed":41,"skipped":3755,"failed":0}
------------------------------
• [13.022 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650
------------------------------
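Here the delete request asks for the opposite behaviour: the RC must remain visible until every pod it owns is gone, which is why the log counts the remaining pods down from 80 before the RC disappears. That behaviour corresponds to foreground propagation, where the owner is held by a foregroundDeletion finalizer until its dependents are removed. A sketch of such a call, under the same client assumptions as above and with an illustrative RC name:

```go
// Foreground propagation: the ReplicationController keeps a foregroundDeletion
// finalizer and is only removed after all of its pods have been deleted.
func deleteRCInForeground(client kubernetes.Interface) error {
	foreground := metav1.DeletePropagationForeground
	return client.CoreV1().ReplicationControllers("gc-4584").Delete(
		context.TODO(),
		"example-rc", // illustrative name, not taken from this run
		metav1.DeleteOptions{PropagationPolicy: &foreground},
	)
}
```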
------------------------------
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support
works end to end
test/e2e/windows/gmsa_full.go:97
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:49:21.625
Jan 27 20:49:21.625: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows 01/27/23 20:49:21.627
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:49:21.726
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:49:21.786
[It] works end to end test/e2e/windows/gmsa_full.go:97
STEP: finding the worker node that fulfills this test's assumptions 01/27/23 20:49:21.848
Jan 27 20:49:21.880: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:187
Jan 27 20:49:21.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-full-test-windows-7986" for this suite. 01/27/23 20:49:21.914
------------------------------
S [SKIPPED] [0.324 seconds] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:27 GMSA support test/e2e/windows/gmsa_full.go:96 [It] works end to end test/e2e/windows/gmsa_full.go:97
Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
In [It] at: test/e2e/windows/gmsa_full.go:103
Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.glob..func5.1.1()
	test/e2e/windows/gmsa_full.go:103 +0x5ea
------------------------------
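The GMSA spec skips itself when the cluster does not match its assumptions: it needs exactly one node carrying the agentpool=windowsgmsa label, and this workload cluster has none. A sketch of that label-selector lookup, assuming the client from the earlier sketches plus the standard fmt package:

```go
// List nodes with the GMSA label and insist on exactly one match,
// mirroring the precondition the skipped spec checks.
func findGMSANode(client kubernetes.Interface) (*corev1.Node, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "agentpool=windowsgmsa",
	})
	if err != nil {
		return nil, err
	}
	if len(nodes.Items) != 1 {
		return nil, fmt.Errorf("expected exactly one node with the label, found %d", len(nodes.Items))
	}
	return &nodes.Items[0], nil
}
```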
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet
Should scale from 5 pods to 3 pods and from 3 to 1
test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 20:49:21.951
Jan 27 20:49:21.952: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 20:49:21.953
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 20:49:22.055
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 20:49:22.116
[It] Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 5 replicas 01/27/23 20:49:22.177
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-6841 01/27/23 20:49:22.22
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-6841 01/27/23 20:49:22.22
I0127 20:49:22.255288 13 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-6841, replica count: 5
I0127 20:49:32.306048 13 runners.go:193] rs Pods: 5 out of 5 created, 0 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0127 20:49:42.306568 13 runners.go:193] rs Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/27/23 20:49:42.306
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-6841 01/27/23 20:49:42.351
I0127 20:49:42.386425 13 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-6841, replica count: 1
I0127 20:49:52.437640 13 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 27 20:49:57.437: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1
Jan 27 20:49:57.469: INFO: RC rs: consume 325 millicores in total
Jan 27 20:49:57.469: INFO: RC rs: setting consumption to 325 millicores in total
Jan 27 20:49:57.470: INFO: RC rs: consume 0 MB in total
Jan 27 20:49:57.470: INFO: RC rs: consume custom metric 0 in total
Jan 27 20:49:57.470: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:49:57.470: INFO: RC rs: disabling consumption of custom metric QPS
Jan 27 20:49:57.470: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:49:57.470: INFO: RC rs: disabling mem consumption
Jan 27 20:49:57.541: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:50:17.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:50:27.529: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:50:27.529: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:50:37.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:50:57.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:50:57.574: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:50:57.574: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:51:17.575: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:51:27.620: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:51:27.620: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:51:37.575: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:51:57.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:51:57.666: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:51:57.666: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:52:17.576: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:52:27.709: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:52:27.709: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:52:37.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:52:57.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:52:57.751: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:52:57.752: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:53:17.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:53:27.792: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:53:27.793: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:53:37.575: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:53:57.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:53:57.833: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:53:57.833: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:54:17.576: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:54:27.874: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:54:27.874: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:54:37.576: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:54:57.574: INFO: waiting for 3 replicas (current: 5)
Jan 27 20:54:57.914: INFO: RC rs: sending request to consume 325 millicores
Jan 27 20:54:57.915: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 27 20:55:17.575: INFO: waiting for 3 replicas (current: 3)
Jan 27 20:55:17.575: INFO: RC rs: consume 10 millicores in total
Jan 27 20:55:17.575: INFO: RC rs: setting consumption to 10 millicores in total
Jan 27 20:55:17.606: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:55:27.956: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:55:27.956: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:55:37.640: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:55:57.639: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:55:58.007: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:55:58.007: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:56:17.641: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:56:28.046: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:56:28.046: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:56:37.640: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:56:57.639: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:56:58.086: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:56:58.086: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:57:17.640: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:57:28.125: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:57:28.125: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:57:37.641: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:57:57.639: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:57:58.164: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:57:58.164: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:58:17.640: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:58:28.209: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:58:28.210: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:58:37.640: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:58:57.638: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:58:58.248: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:58:58.248: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:59:17.638: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:59:28.287: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:59:28.288: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 20:59:37.639: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:59:57.639: INFO: waiting for 1 replicas (current: 3)
Jan 27 20:59:58.326: INFO: RC rs: sending request to consume 10 millicores
Jan 27 20:59:58.326: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 21:00:17.639: INFO: waiting for 1 replicas (current: 2)
Jan 27 21:00:28.366: INFO: RC rs: sending request to consume 10 millicores
Jan 27 21:00:28.366: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6841/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 27 21:00:37.638: INFO: waiting for 1 replicas (current: 2)
Jan 27 21:00:57.641: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC rs 01/27/23 21:00:57.677
Jan 27 21:00:57.677: INFO: RC rs: stopping metric consumer
Jan 27 21:00:57.677: INFO: RC rs: stopping CPU consumer
Jan 27 21:00:57.677: INFO: RC rs: stopping mem consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-6841, will wait for the garbage collector to delete the pods 01/27/23 21:01:07.678
Jan 27 21:01:07.797: INFO: Deleting ReplicaSet.apps rs took: 35.647747ms
Jan 27 21:01:07.897: INFO: Terminating ReplicaSet.apps rs pods took: 100.886425ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-6841, will wait for the garbage collector to delete the pods 01/27/23 21:01:09.658
Jan 27 21:01:09.776: INFO: Deleting ReplicationController rs-ctrl took: 36.002378ms
Jan 27 21:01:09.877: INFO: Terminating ReplicationController rs-ctrl pods took: 100.692371ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
Jan 27 21:01:11.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-6841" for this suite. 01/27/23 21:01:11.777
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1","completed":42,"skipped":3843,"failed":0}
------------------------------
• [SLOW TEST] [709.864 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
------------------------------
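The spec drives an HPA on the ReplicaSet: while the resource consumer holds CPU usage at 325 millicores the autoscaler settles at 3 replicas, and after consumption drops to 10 millicores it settles at 1. A sketch of the kind of autoscaling/v2 object involved, assuming the client from the earlier sketches plus k8s.io/api/autoscaling/v2 imported as autoscalingv2; the utilization target below is illustrative, not the value the e2e framework configures:

```go
// Create a CPU-utilization HPA targeting the "rs" ReplicaSet, min 1 / max 5 replicas.
func createCPUHPA(client kubernetes.Interface) error {
	minReplicas := int32(1)
	target := int32(20) // illustrative average-utilization target
	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rs", Namespace: "horizontal-pod-autoscaling-6841"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "ReplicaSet", Name: "rs",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &target,
					},
				},
			}},
		},
	}
	_, err := client.AutoscalingV2().HorizontalPodAutoscalers("horizontal-pod-autoscaling-6841").
		Create(context.TODO(), hpa, metav1.CreateOptions{})
	return err
}
```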
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory
should be equal to a calculated allocatable memory value
test/e2e/windows/memory_limits.go:54
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 21:01:11.816
Jan 27 21:01:11.817: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows 01/27/23 21:01:11.818
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 21:01:11.918
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 21:01:11.98
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48
[It] should be equal to a calculated allocatable memory value test/e2e/windows/memory_limits.go:54
STEP: Getting memory details from node status and kubelet config 01/27/23 21:01:12.078
Jan 27 21:01:12.078: INFO: Getting configuration details for node capz-conf-7xz7d
Jan 27 21:01:12.123: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}}
STEP: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} 01/27/23 21:01:12.123
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:187
Jan 27 21:01:12.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-7050" for this suite. 01/27/23 21:01:12.157
{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value","completed":43,"skipped":3852,"failed":0}
------------------------------
• [0.377 seconds] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:27 Allocatable node memory test/e2e/windows/memory_limits.go:53 should be equal to a calculated allocatable memory value test/e2e/windows/memory_limits.go:54
------------------------------
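The memory-limits spec recomputes allocatable memory from the node's capacity and kubelet reservations and compares it with what the node reports. With the values logged above (capacity 16776756Ki, system- and kube-reserved both zero, hard eviction 100Mi = 102400Ki), the arithmetic works out: 16776756Ki - 102400Ki = 16674356Ki, exactly the reported allocatable. A small sketch of that calculation with apimachinery's resource.Quantity:

```go
import "k8s.io/apimachinery/pkg/api/resource"

// Recompute allocatable memory as capacity minus reservations, using the
// values from this run; system- and kube-reserved are zero on this node.
func calculatedAllocatable() resource.Quantity {
	capacity := resource.MustParse("16776756Ki")
	hardEviction := resource.MustParse("100Mi") // 102400Ki
	allocatable := capacity.DeepCopy()
	allocatable.Sub(hardEviction) // 16776756Ki - 102400Ki = 16674356Ki
	return allocatable
}
```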
�[38;5;243m01/27/23 21:01:12.157�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mwith scale limited by number of Pods rate�[0m �[1mshould scale down no more than given number of Pods per minute�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/27/23 21:01:12.194�[0m Jan 27 21:01:12.194: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/27/23 21:01:12.196�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/27/23 21:01:12.294�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/27/23 21:01:12.355�[0m [It] should scale down no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m01/27/23 21:01:12.416�[0m �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 6 replicas �[38;5;243m01/27/23 21:01:12.417�[0m �[1mSTEP:�[0m creating deployment consumer in namespace horizontal-pod-autoscaling-8793 �[38;5;243m01/27/23 21:01:12.463�[0m I0127 21:01:12.497852 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-8793, replica count: 6 I0127 21:01:22.549277 13 runners.go:193] consumer Pods: 6 out of 6 created, 6 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/27/23 21:01:22.549�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-8793 �[38;5;243m01/27/23 21:01:22.6�[0m I0127 21:01:22.638010 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-8793, replica count: 1 I0127 21:01:32.689241 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 27 21:01:37.691: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 27 21:01:37.723: INFO: RC consumer: consume 660 millicores in total Jan 27 21:01:37.723: INFO: RC consumer: setting consumption to 660 millicores in total Jan 27 21:01:37.723: INFO: RC consumer: sending request to consume 660 millicores Jan 27 21:01:37.723: INFO: RC consumer: consume 0 MB in total Jan 27 21:01:37.723: INFO: RC consumer: disabling mem consumption Jan 27 21:01:37.723: INFO: RC consumer: consume custom metric 0 in total Jan 27 21:01:37.723: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8793/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=660&requestSizeMillicores=100 } Jan 27 21:01:37.723: INFO: RC consumer: disabling consumption of custom metric QPS 
STEP: triggering scale down by lowering consumption 01/27/23 21:01:37.757
Jan 27 21:01:37.758: INFO: RC consumer: consume 110 millicores in total
Jan 27 21:01:40.771: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 27 21:01:40.803: INFO: waiting for 4 replicas (current: 6)
Jan 27 21:02:00.837: INFO: waiting for 4 replicas (current: 4)
Jan 27 21:02:00.869: INFO: waiting for 2 replicas (current: 4)
Jan 27 21:02:10.772: INFO: RC consumer: sending request to consume 110 millicores
Jan 27 21:02:10.772: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8793/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 27 21:02:20.904: INFO: waiting for 2 replicas (current: 4)
Jan 27 21:02:40.812: INFO: RC consumer: sending request to consume 110 millicores
Jan 27 21:02:40.812: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8793/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 27 21:02:40.901: INFO: waiting for 2 replicas (current: 4)
Jan 27 21:03:00.902: INFO: waiting for 2 replicas (current: 2)
STEP: verifying time waited for a scale down to 4 replicas 01/27/23 21:03:00.902
STEP: verifying time waited for a scale down to 2 replicas 01/27/23 21:03:00.902
STEP: Removing consuming RC consumer 01/27/23 21:03:00.938
Jan 27 21:03:00.938: INFO: RC consumer: stopping metric consumer
Jan 27 21:03:00.938: INFO: RC consumer: stopping CPU consumer
Jan 27 21:03:00.938: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-8793, will wait for the garbage collector to delete the pods 01/27/23 21:03:10.939
Jan 27 21:03:11.061: INFO: Deleting Deployment.apps consumer took: 37.546672ms
Jan 27 21:03:11.161: INFO: Terminating Deployment.apps consumer pods took: 100.788718ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-8793, will wait for the garbage collector to delete the pods 01/27/23 21:03:13.117
Jan 27 21:03:13.235: INFO: Deleting ReplicationController consumer-ctrl took: 34.934184ms
Jan 27 21:03:13.336: INFO: Terminating ReplicationController consumer-ctrl pods took: 101.010312ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187
Jan 27 21:03:14.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-8793" for this suite. 01/27/23 21:03:14.938
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute","completed":44,"skipped":3866,"failed":0}
------------------------------
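Editor's note: the pacing above (6 -> 4 replicas by ~21:02:00, then 4 -> 2 by ~21:03:00) is what this spec asserts: with a scaleDown behavior limited by number of Pods per period, the HPA removes at most that many Pods each period, however far the metric has dropped. Below is a minimal sketch of the expected minimum wait; the 2-Pods-per-60s policy is an assumption inferred from the observed timing, not taken from the test source (the actual values live at horizontal_pod_autoscaling_behavior.go:253), and minScaleDownTime is a hypothetical helper for illustration only:

package main

import (
	"fmt"
	"time"
)

// minScaleDownTime returns the minimum time an HPA needs to go from `from` to
// `to` replicas when its scaleDown behavior allows removing at most
// podsPerPeriod Pods per period (the first removal can happen immediately).
func minScaleDownTime(from, to, podsPerPeriod int, period time.Duration) time.Duration {
	steps := 0
	for r := from; r > to; r -= podsPerPeriod {
		steps++
	}
	if steps == 0 {
		return 0
	}
	return time.Duration(steps-1) * period
}

func main() {
	// Assumed policy: at most 2 Pods per 60s period; scaling 6 -> 2 as in the log above.
	fmt.Println(minScaleDownTime(6, 2, 2, 60*time.Second)) // 1m0s
}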
• [SLOW TEST] [122.782 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
with scale limited by number of Pods rate
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:210
should scale down no more than given number of Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/27/23 21:01:12.194
Jan 27 21:01:12.194: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/27/23 21:01:12.196
STEP: Waiting for a default service account to be provisioned in namespace 01/27/23 21:01:12.294
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/27/23 21:01:12.355
[It] should scale down no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253
STEP: setting up resource consumer and HPA 01/27/23 21:01:12.416
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 6 replicas 01/27/23 21:01:12.417
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-8793 01/27/23 21:01:12.463
I0127 21:01:12.497852 13 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-8793, replica count: 6
I0127 21:01:22.549277 13 runners.go:193] consumer Pods: 6 out of 6 created, 6 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/27/23 21:01:22.549
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-8793 01/27/23 21:01:22.6
I0127 21:01:22.638010 13 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-8793, replica count: 1
I0127 21:01:32.689241 13 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 27 21:01:37.691: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 27 21:01:37.723: INFO: RC consumer: consume 660 millicores in total
Jan 27 21:01:37.723: INFO: RC consumer: setting consumption to 660 millicores in total
Jan 27 21:01:37.723: INFO: RC consumer: sending request to consume 660 millicores
Jan 27 21:01:37.723: INFO: RC consumer: consume 0 MB in total
Jan 27 21:01:37.723: INFO: RC consumer: disabling mem consumption
Jan 27 21:01:37.723: INFO: RC consumer: consume custom metric 0 in total
Jan 27 21:01:37.723: INFO: ConsumeCPU URL: {https capz-conf-sz5101-2e915fcb.eastus2.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8793/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=660&requestSizeMillicores=100 }
Jan 27 21:01:37.723: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: triggering scale down by lowering consumption 01/27/23 21:01:37.757