Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 3h46m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
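The `--ginkgo.focus` value is a regular expression matched against a spec's full, space-joined text; the backslash escapes survive shell quoting so `\s` and the bracket escapes reach Ginkgo intact. A minimal sketch of what this pattern selects, assuming the spec's full text is the suite and container names joined with the leaf name as shown below:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern from the job's --test_args, verbatim.
	focus := regexp.MustCompile(`capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$`)

	// Assumed full spec text for the conformance spec in this run.
	spec := "capz-e2e [It] Conformance Tests conformance-tests"

	fmt.Println(focus.MatchString(spec)) // true: only this spec is run
}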
[FAILED] Unexpected error:
    <*errors.withStack | 0xc000536468>: {
        error: <*errors.withMessage | 0xc0005ac580>{
            cause: <*errors.errorString | 0xc000cc07d0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/24/23 22:31:47.853
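The nested `*errors.withStack` / `*errors.withMessage` pair around a plain `*errors.errorString` is exactly the shape `errors.Wrap` from `github.com/pkg/errors` produces when it wraps a standard-library error once. A minimal sketch of how an error with this structure is built (`runConformance` is a hypothetical stand-in for the container-run step):

package main

import (
	"errors"
	"fmt"

	pkgerrors "github.com/pkg/errors"
)

// runConformance stands in for the step that runs the conformance container.
func runConformance() error {
	return errors.New("error container run failed with exit code 1") // *errors.errorString
}

func main() {
	if err := runConformance(); err != nil {
		// Wrap adds a message (withMessage) and records a stack trace (withStack),
		// the two wrapper types visible in the failure dump above.
		wrapped := pkgerrors.Wrap(err, "Unable to run conformance tests")
		fmt.Printf("%+v\n", wrapped) // %+v prints the message chain plus the stack
	}
}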
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/24/23 19:12:50.689
INFO: Cluster name is capz-conf-a7mu8n
STEP: Creating namespace "capz-conf-a7mu8n" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:12:50.689
Jan 24 19:12:50.689: INFO: starting to create namespace for hosting the "capz-conf-a7mu8n" test spec
INFO: Creating namespace capz-conf-a7mu8n
INFO: Creating event watcher for namespace "capz-conf-a7mu8n"
< Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/24/23 19:12:50.737 (48ms)
> Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/24/23 19:12:50.737
conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/24/23 19:12:50.737
conformance-tests
  Name                        | N | Min       | Median    | Mean      | StdDev | Max
  ========================================================================================
  cluster creation [duration] | 1 | 9m8.6654s | 9m8.6654s | 9m8.6654s | 0s     | 9m8.6654s
INFO: Creating the workload cluster with name "capz-conf-a7mu8n" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.25.7-rc.0.7+7bbb8f0bf413ea, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-a7mu8n --infrastructure (default) --kubernetes-version v1.25.7-rc.0.7+7bbb8f0bf413ea --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/24/23 19:12:54.183
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/24/23 19:14:54.274
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/24/23 19:14:54.274
Jan 24 19:17:10.197: INFO: getting history for release projectcalico
Jan 24 19:17:10.306: INFO: Release projectcalico does not exist, installing it
Jan 24 19:17:11.680: INFO: creating 1 resource(s)
Jan 24 19:17:11.820: INFO: creating 1 resource(s)
Jan 24 19:17:11.959: INFO: creating 1 resource(s)
Jan 24 19:17:12.085: INFO: creating 1 resource(s)
Jan 24 19:17:12.251: INFO: creating 1 resource(s)
Jan 24 19:17:12.381: INFO: creating 1 resource(s)
Jan 24 19:17:12.683: INFO: creating 1 resource(s)
Jan 24 19:17:12.862: INFO: creating 1 resource(s)
Jan 24 19:17:12.989: INFO: creating 1 resource(s)
Jan 24 19:17:13.127: INFO: creating 1 resource(s)
Jan 24 19:17:13.249: INFO: creating 1 resource(s)
Jan 24 19:17:13.372: INFO: creating 1 resource(s)
Jan 24 19:17:13.530: INFO: creating 1 resource(s)
Jan 24 19:17:13.672: INFO: creating 1 resource(s)
Jan 24 19:17:13.798: INFO: creating 1 resource(s)
Jan 24 19:17:13.940: INFO: creating 1 resource(s)
Jan 24 19:17:14.523: INFO: creating 1 resource(s)
Jan 24 19:17:14.676: INFO: creating 1 resource(s)
Jan 24 19:17:16.429: INFO: creating 1 resource(s)
Jan 24 19:17:16.670: INFO: creating 1 resource(s)
Jan 24 19:17:18.149: INFO: creating 1 resource(s)
Jan 24 19:17:19.196: INFO: Clearing discovery cache
Jan 24 19:17:19.196: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 24 19:17:26.549: INFO: creating 1 resource(s)
Jan 24 19:17:27.746: INFO: creating 6 resource(s)
Jan 24 19:17:29.215: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/24/23 19:17:30.15
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:17:30.59
Jan 24 19:17:30.590: INFO: starting to wait for deployment to become available
Jan 24 19:17:41.064: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.474319997s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/24/23 19:17:42.261
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:17:43.231
Jan 24 19:17:43.231: INFO: starting to wait for deployment to become available
Jan 24 19:18:54.596: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m11.364705092s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:18:55.761
Jan 24 19:18:55.761: INFO: starting to wait for deployment to become available
Jan 24 19:18:55.895: INFO: Deployment calico-system/calico-typha is now available, took 134.408966ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/24/23 19:18:55.895
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:18:56.683
Jan 24 19:18:56.683: INFO: starting to wait for deployment to become available
Jan 24 19:19:17.016: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.333062546s
STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/24/23 19:19:17.016
STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:17.56
Jan 24 19:19:17.560: INFO: waiting for daemonset calico-system/calico-node to be complete
Jan 24 19:19:17.671: INFO: 1 daemonset calico-system/calico-node pods are running, took 110.735161ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/24/23 19:19:17.671
STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:18.209
Jan 24 19:19:18.209: INFO: waiting for daemonset calico-system/calico-node-windows to be complete
Jan 24 19:19:18.361: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 152.267105ms
STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/24/23 19:19:18.361
STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:18.811
Jan 24 19:19:18.812: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete
Jan 24 19:19:18.914: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 102.933353ms
INFO: Waiting for the first control plane machine managed by capz-conf-a7mu8n/capz-conf-a7mu8n-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/24/23 19:19:18.94
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/24/23 19:19:18.949
Jan 24 19:19:19.072: INFO: getting history for release azuredisk-csi-driver-oot
Jan 24 19:19:19.177: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 24 19:19:23.931: INFO: creating 1 resource(s)
Jan 24 19:19:24.368: INFO: creating 18 resource(s)
Jan 24 19:19:25.256: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/24/23 19:19:25.277
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:25.724
Jan 24 19:19:25.724: INFO: starting to wait for deployment to become available
Jan 24 19:19:56.167: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.443222293s
STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/24/23 19:19:56.167
STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:56.712
Jan 24 19:19:56.712: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete
Jan 24 19:19:56.822: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 109.816797ms
STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/24/23 19:19:57.354
Jan 24 19:19:57.354: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete
Jan 24 19:19:57.461: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 107.283327ms
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-a7mu8n/capz-conf-a7mu8n-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/24/23 19:19:57.49
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/24/23 19:19:57.504
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/24/23 19:19:57.557
STEP: Checking all the machines controlled by capz-conf-a7mu8n-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/24/23 19:19:57.58
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/24/23 19:19:57.599
STEP: Checking all the machines controlled by capz-conf-a7mu8n-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/24/23 19:21:57.928
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '' for version 'v1.25.7-rc.0.7+7bbb8f0bf413ea'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=2" "-ginkgo.flakeAttempts=0" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.slow-spec-threshold=120s" "-ginkgo.v=true" "-node-os-distro=windows" "-prepull-images=true" "-disable-log-dump=true" "-dump-logs-on-failure=true" "-ginkgo.progress=true" "-ginkgo.timeout=4h" "-ginkgo.trace=true"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/24/23 19:22:00.173
I0124 19:22:10.902575      14 e2e.go:116] Starting e2e run "f9b724bc-0186-4d6c-a99f-97b9ecfd5e2e" on Ginkgo node 1
Jan 24 19:22:10.933: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1674588130 - will randomize all specs
Will run 70 of 7066 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
Jan 24 19:22:11.424: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 19:22:11.426: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 24 19:22:11.925: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 24 19:22:12.334: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 24 19:22:12.335: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Jan 24 19:22:12.335: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 24 19:22:12.504: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node-win' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed)
Jan 24 19:22:12.504: INFO: Pre-pulling images so that they are cached for the tests.
Jan 24 19:22:13.261: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40
Jan 24 19:22:13.407: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:13.571: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Jan 24 19:22:13.571: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:22:22.714: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:22.874: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Jan 24 19:22:22.874: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:22:31.709: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:31.863: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Jan 24 19:22:31.863: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:22:40.710: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:40.866: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 1
Jan 24 19:22:40.866: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:22:49.709: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:49.867: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 2
Jan 24 19:22:49.867: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40
Jan 24 19:22:49.867: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2
Jan 24 19:22:50.001: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:50.156: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2: 2
Jan 24 19:22:50.156: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2
Jan 24 19:22:50.156: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Jan 24 19:22:50.289: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:50.444: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 1
Jan 24 19:22:50.444: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:22:59.580: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:22:59.735: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 2
Jan 24 19:22:59.735: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Jan 24 19:22:59.735: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Jan 24 19:22:59.867: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:23:00.019: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2: 2
Jan 24 19:23:00.019: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Jan 24 19:23:00.142: INFO: e2e test version: v1.25.7-rc.0.7+7bbb8f0bf413ea
Jan 24 19:23:00.247: INFO: kube-apiserver version: v1.25.7-rc.0.7+7bbb8f0bf413ea
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
Jan 24 19:23:00.247: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 19:23:00.351: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [48.928 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
------------------------------
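The pre-pull loop above repeatedly compares a DaemonSet's available pod count against the number of schedulable nodes, skipping the tainted control-plane node. A minimal client-go sketch of that polling pattern, assuming the suite's kubeconfig path and using a DaemonSet name taken from the log (`ns` and `dsName` are illustrative variables):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical setup: load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ns, dsName := "kube-system", "kube-proxy-windows" // names from the log

	// Poll until every node the DaemonSet can schedule onto runs an available
	// pod, mirroring "Number of running nodes: N, number of available pods: N".
	err = wait.PollImmediate(9*time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := clientset.AppsV1().DaemonSets(ns).Get(context.TODO(), dsName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("%d / %d pods available in daemonset %q\n",
			ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled, dsName)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}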
S…S (skipped specs)
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  test/e2e/apimachinery/garbage_collector.go:849
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:23:00.405
Jan 24 19:23:00.405: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/24/23 19:23:00.408
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:23:00.72
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:23:00.923
[It] should not be blocked by dependency circle [Conformance]
  test/e2e/apimachinery/garbage_collector.go:849
Jan 24 19:23:01.563: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6d9c7920-eed3-40ef-adcc-a432f3e03a01", Controller:(*bool)(0xc002d5a70e), BlockOwnerDeletion:(*bool)(0xc002d5a70f)}}
Jan 24 19:23:01.671: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6aa15c3c-f599-448a-bb85-b296e90f89cc", Controller:(*bool)(0xc002d5a9a6), BlockOwnerDeletion:(*bool)(0xc002d5a9a7)}}
Jan 24 19:23:01.782: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"869c9fb0-b724-4c5e-af14-a01e61d6ad8d", Controller:(*bool)(0xc002d5ac36), BlockOwnerDeletion:(*bool)(0xc002d5ac37)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jan 24 19:23:06.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7967" for this suite. 01/24/23 19:23:07.126
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","completed":1,"skipped":247,"failed":0}
------------------------------
• [6.832 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  test/e2e/apimachinery/garbage_collector.go:849
------------------------------
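The spec above links three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that the garbage collector still makes progress. A minimal sketch of attaching such an owner reference with the Kubernetes API types; the pod construction is abbreviated and the UID is copied from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller, block := true, true

	// pod1 declares pod3 as its owner, matching the log's
	// pod1.ObjectMeta.OwnerReferences output; UID is the owner's UID.
	pod1 := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "pod1",
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               "pod3",
				UID:                "6d9c7920-eed3-40ef-adcc-a432f3e03a01", // from the log
				Controller:         &controller,
				BlockOwnerDeletion: &block,
			}},
		},
	}
	fmt.Printf("%+v\n", pod1.ObjectMeta.OwnerReferences)
}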
S…S (skipped specs)
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  test/e2e/apps/daemon_set.go:193
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:23:07.241
Jan 24 19:23:07.241: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 01/24/23 19:23:07.243
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:23:07.554
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:23:07.756
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should run and stop complex daemon [Conformance]
  test/e2e/apps/daemon_set.go:193
Jan 24 19:23:08.432: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes. 01/24/23 19:23:08.541
Jan 24 19:23:08.644: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:08.644: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Change node label to blue, check that daemon pod is launched. 01/24/23 19:23:08.644
Jan 24 19:23:09.108: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:09.108: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:10.215: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:10.215: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:11.215: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:11.215: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:12.215: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:12.215: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:13.215: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:13.216: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:14.215: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:23:14.215: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Update the node label to green, and wait for daemons to be unscheduled 01/24/23 19:23:14.325
Jan 24 19:23:14.662: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:14.662: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 01/24/23 19:23:14.662
Jan 24 19:23:14.880: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:14.880: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:15.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:15.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:16.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:16.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:17.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:17.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:18.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:18.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:19.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:19.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:20.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:20.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:21.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:21.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:22.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:22.986: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:23:23.986: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:23:23.986: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set" 01/24/23 19:23:24.191
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1859, will wait for the garbage collector to delete the pods 01/24/23 19:23:24.191
Jan 24 19:23:24.550: INFO: Deleting DaemonSet.extensions daemon-set took: 105.899025ms
Jan 24 19:23:24.651: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.138585ms
Jan 24 19:23:29.754: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:23:29.754: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 24 19:23:29.858: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2484"},"items":null}
Jan 24 19:23:29.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2485"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
Jan 24 19:23:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1859" for this suite. 01/24/23 19:23:30.554
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","completed":2,"skipped":285,"failed":0}
------------------------------
• [23.423 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  test/e2e/apps/daemon_set.go:193
------------------------------
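The spec drives scheduling purely through labels: the DaemonSet carries a nodeSelector, so its pods land only on nodes whose labels match, and relabeling a node evicts the pod. A minimal sketch of the shape of such a DaemonSet using the Kubernetes API types; the label key, container name, and image are illustrative assumptions:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemon": "daemon-set"}

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labeled color=blue run a daemon pod; relabeling a
					// node to color=green unschedules it, as seen in the log.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "registry.k8s.io/e2e-test-images/httpd:2.4.38-2",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Spec.Template.Spec.NodeSelector)
}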
SSSSSSSS
------------------------------
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/apps/statefulset.go:695
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:23:30.665
Jan 24 19:23:30.665: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset 01/24/23 19:23:30.666
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:23:30.975
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:23:31.177
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-1491 01/24/23 19:23:31.379
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/apps/statefulset.go:695
STEP: Creating stateful set ss in namespace statefulset-1491 01/24/23 19:23:31.485
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1491 01/24/23 19:23:31.594
Jan 24 19:23:31.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 19:23:41.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 01/24/23 19:23:41.803
Jan 24 19:23:41.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 19:23:43.539: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 19:23:43.539: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 19:23:43.539: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 19:23:43.644: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 24 19:23:53.752: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 19:23:53.753: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 19:23:54.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999969s
Jan 24 19:23:55.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.888077349s
Jan 24 19:23:56.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.774303394s
Jan 24 19:23:57.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.661740292s
Jan 24 19:23:58.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.549565087s
Jan 24 19:23:59.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.434454167s
Jan 24 19:24:00.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.321865223s
Jan 24 19:24:01.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.209593658s
Jan 24 19:24:03.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.095524366s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1491 01/24/23 19:24:04.082
Jan 24 19:24:04.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 19:24:05.339: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 19:24:05.339: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 19:24:05.339: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 19:24:05.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 19:24:06.487: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 24 19:24:06.487: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 19:24:06.487: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 19:24:06.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 19:24:07.616: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 24 19:24:07.616: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 19:24:07.616: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 19:24:07.728: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 19:24:07.728: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 19:24:07.728: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod 01/24/23 19:24:07.728
Jan 24 19:24:07.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 19:24:09.008: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 19:24:09.008: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 19:24:09.008: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 19:24:09.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 19:24:10.151: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 19:24:10.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 19:24:10.151: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 19:24:10.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1491 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 19:24:11.255: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 19:24:11.255: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 19:24:11.255: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 19:24:11.255: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 19:24:11.358: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 24 19:24:21.572: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 19:24:21.572: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 19:24:21.572: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 19:24:21.902: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 24 19:24:21.902: INFO: ss-0  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC }]
Jan 24 19:24:21.902: INFO: ss-1  capz-conf-s4kcn  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }]
Jan 24 19:24:21.902: INFO: ss-2  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }]
Jan 24 19:24:21.902: INFO:
Jan 24 19:24:21.902: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 19:24:23.015: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 24 19:24:23.015: INFO: ss-0  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC }]
Jan 24 19:24:23.015: INFO: ss-1  capz-conf-s4kcn  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }]
Jan 24 19:24:23.015: INFO: ss-2  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }]
Jan 24 19:24:23.015: INFO:
Jan 24 19:24:23.015: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 19:24:24.128: INFO: POD   NODE             PHASE    GRACE  CONDITIONS
Jan 24 19:24:24.128: INFO: ss-0  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC }]
Jan 24 19:24:24.128: INFO: ss-1  capz-conf-s4kcn  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }]
Jan 24 19:24:24.128: INFO: ss-2  capz-conf-jzg2c  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:24.128: INFO: Jan 24 19:24:24.128: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 19:24:25.242: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 19:24:25.242: INFO: ss-0 capz-conf-jzg2c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC }] Jan 24 19:24:25.242: INFO: ss-1 capz-conf-s4kcn Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:25.242: INFO: ss-2 capz-conf-jzg2c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:25.242: INFO: Jan 24 19:24:25.242: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 19:24:26.355: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 19:24:26.356: INFO: ss-0 capz-conf-jzg2c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:31 +0000 UTC }] Jan 24 19:24:26.356: INFO: ss-1 capz-conf-s4kcn Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:26.356: INFO: ss-2 capz-conf-jzg2c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:26.356: INFO: Jan 24 19:24:26.356: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 19:24:27.465: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 19:24:27.465: INFO: ss-1 capz-conf-s4kcn Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:27.465: INFO: ss-2 capz-conf-jzg2c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:24:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 19:23:54 +0000 UTC }] Jan 24 19:24:27.465: INFO: Jan 24 19:24:27.465: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 24 19:24:28.567: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.317262891s Jan 24 19:24:29.670: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.214866022s Jan 24 19:24:30.773: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.11155146s Jan 24 19:24:31.876: INFO: Verifying statefulset ss doesn't scale past 0 for another 8.94321ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1491 01/24/23 19:24:32.877
Jan 24 19:24:32.979: INFO: Scaling statefulset ss to 0 Jan 24 19:24:33.288: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122
Jan 24 19:24:33.390: INFO: Deleting all statefulset in ns statefulset-1491 Jan 24 19:24:33.492: INFO: Scaling statefulset ss to 0 Jan 24 19:24:33.799: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 19:24:33.901: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187
Jan 24 19:24:34.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1491" for this suite. 01/24/23 19:24:34.339
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","completed":3,"skipped":293,"failed":0}
------------------------------
• [63.782 seconds]
[sig-apps] StatefulSet
  test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
    test/e2e/apps/statefulset.go:101
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
      test/e2e/apps/statefulset.go:695
------------------------------
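The exec commands quoted above are how this spec toggles pod health: the httpd pods presumably serve /usr/local/apache2/htdocs/index.html behind an HTTP readiness probe (an assumption about the e2e template, not stated in the log), so moving the page out of the docroot drives Ready=false and moving it back restores Ready=true. A minimal sketch of the same toggle run by hand, with pod and namespace names taken from the log:

  # Mark ss-0 unhealthy: once the page is gone the readiness probe should fail.
  kubectl --namespace=statefulset-1491 exec ss-0 -- /bin/sh -c \
    'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  # Restore the page to mark it healthy again.
  kubectl --namespace=statefulset-1491 exec ss-0 -- /bin/sh -c \
    'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
  # Watch the Ready condition flip, which is what the "Waiting for pod ss-0
  # to enter Running - Ready=false/true" lines above poll for.
  kubectl --namespace=statefulset-1491 get pod ss-0 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

The '|| true' guard matters: on pods where the file was never moved (ss-1 and ss-2 during scale-up), mv fails with "No such file or directory", and the guard keeps the exec's exit code zero so the test can treat the operation as idempotent.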
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/apps/daemon_set.go:373
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:24:34.45
Jan 24 19:24:34.450: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 01/24/23 19:24:34.452
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:24:34.766
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:24:34.968
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:373
Jan 24 19:24:35.624: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster. 01/24/23 19:24:35.731
Jan 24 19:24:35.859: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:35.968: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:24:35.968: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:37.096: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:37.204: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:24:37.204: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:38.096: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:38.204: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:24:38.205: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:39.097: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:39.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:24:39.206: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:40.097: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:40.206: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:24:40.206: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:41.097: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:41.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 24 19:24:41.205: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1 Jan 24 19:24:42.096: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:42.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 24 19:24:42.205: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Update daemon pods image. 01/24/23 19:24:42.621
STEP: Check that daemon pods images are updated. 01/24/23 19:24:42.86
Jan 24 19:24:42.971: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:43.099: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:44.209: INFO: Wrong image for pod: daemon-set-kqvkd.
Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:44.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:45.220: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:45.348: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:46.209: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:46.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:47.208: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:47.338: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:48.209: INFO: Pod daemon-set-22dvm is not available Jan 24 19:24:48.209: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:48.339: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:49.209: INFO: Pod daemon-set-22dvm is not available Jan 24 19:24:49.209: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:49.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:50.208: INFO: Pod daemon-set-22dvm is not available Jan 24 19:24:50.208: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:50.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:51.209: INFO: Pod daemon-set-22dvm is not available Jan 24 19:24:51.209: INFO: Wrong image for pod: daemon-set-kqvkd. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:51.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:52.213: INFO: Pod daemon-set-22dvm is not available Jan 24 19:24:52.213: INFO: Wrong image for pod: daemon-set-kqvkd. 
Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Jan 24 19:24:52.340: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:53.339: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:54.339: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:55.344: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:56.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:57.209: INFO: Pod daemon-set-7xxtg is not available Jan 24 19:24:57.336: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster. 01/24/23 19:24:57.337
Jan 24 19:24:57.464: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:57.573: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 24 19:24:57.573: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:58.702: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:58.816: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 24 19:24:58.817: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:24:59.707: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:24:59.815: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 24 19:24:59.815: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 19:25:00.701: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 19:25:00.811: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 24 19:25:00.811: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set" 01/24/23 19:25:01.33
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1381, will wait for the garbage collector to delete the pods 01/24/23 19:25:01.33
Jan 24 19:25:01.689: INFO: Deleting DaemonSet.extensions daemon-set took: 106.15029ms Jan 24 19:25:01.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.681443ms Jan 24 19:25:05.691: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 19:25:05.691: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 24 19:25:05.793: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3109"},"items":null} Jan 24 19:25:05.896: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3109"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
Jan 24 19:25:06.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1381" for this suite. 01/24/23 19:25:06.368
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","completed":4,"skipped":320,"failed":0}
------------------------------
• [32.027 seconds]
[sig-apps] Daemon set [Serial]
  test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
    test/e2e/apps/daemon_set.go:373
------------------------------
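The rollout above is driven by patching the DaemonSet's pod template image (httpd:2.4.38-2 -> agnhost:2.40); with updateStrategy RollingUpdate the controller replaces pods node by node, which is why the log alternates "Wrong image for pod" with "Pod ... is not available" until both nodes converge. A rough hand-run equivalent, assuming a container named "app" (the e2e template's actual container name may differ):

  # Swap the image in the pod template; the DaemonSet controller rolls it out per node.
  kubectl --namespace=daemonsets-1381 set image daemonset/daemon-set \
    app=registry.k8s.io/e2e-test-images/agnhost:2.40
  # Block until every scheduled node runs the updated pod, mirroring the
  # "Check that daemon pods images are updated" step above.
  kubectl --namespace=daemonsets-1381 rollout status daemonset/daemon-set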
SSSSSSSSS
------------------------------
[sig-node] Pods
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:716
[BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:25:06.479
Jan 24 19:25:06.479: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods 01/24/23 19:25:06.481
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:25:06.797
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:25:06.999
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:193
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:716
Jan 24 19:25:07.312: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-9169" to be "running and ready" Jan 24 19:25:07.414: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 102.111465ms Jan 24 19:25:07.414: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:09.518: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205696546s Jan 24 19:25:09.518: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:11.518: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205387408s Jan 24 19:25:11.518: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:13.524: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. Elapsed: 6.211811816s
Elapsed: 6.211811816s Jan 24 19:25:13.524: INFO: The phase of Pod back-off-cap is Running (Ready = true) Jan 24 19:25:13.524: INFO: Pod "back-off-cap" satisfied condition "running and ready" �[1mSTEP:�[0m getting restart delay when capped �[38;5;243m01/24/23 19:35:13.632�[0m Jan 24 19:36:36.070: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-24 19:31:32 +0000 UTC restartedAt=2023-01-24 19:36:34 +0000 UTC (5m2s) Jan 24 19:41:52.655: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-24 19:36:40 +0000 UTC restartedAt=2023-01-24 19:41:50 +0000 UTC (5m10s) Jan 24 19:47:05.787: INFO: getRestartDelay: restartCount = 9, finishedAt=2023-01-24 19:41:55 +0000 UTC restartedAt=2023-01-24 19:47:03 +0000 UTC (5m8s) �[1mSTEP:�[0m getting restart delay after a capped delay �[38;5;243m01/24/23 19:47:05.787�[0m Jan 24 19:52:20.062: INFO: getRestartDelay: restartCount = 10, finishedAt=2023-01-24 19:47:08 +0000 UTC restartedAt=2023-01-24 19:52:17 +0000 UTC (5m9s) [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:187 Jan 24 19:52:20.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "pods-9169" for this suite. �[38;5;243m01/24/23 19:52:20.222�[0m {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","completed":5,"skipped":329,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [1633.848 seconds]�[0m [sig-node] Pods �[38;5;243mtest/e2e/common/node/framework.go:23�[0m should cap back-off at MaxContainerBackOff [Slow][NodeConformance] �[38;5;243mtest/e2e/common/node/pods.go:716�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 19:25:06.479�[0m Jan 24 19:25:06.479: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename pods �[38;5;243m01/24/23 19:25:06.481�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 19:25:06.797�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 19:25:06.999�[0m [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:193 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:716 Jan 24 19:25:07.312: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-9169" to be "running and ready" Jan 24 19:25:07.414: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 102.111465ms Jan 24 19:25:07.414: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:09.518: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205696546s Jan 24 19:25:09.518: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:11.518: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205387408s Jan 24 19:25:11.518: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 24 19:25:13.524: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. 
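For context, the ~5m gaps between restarts logged above are what this spec asserts: kubelet doubles the crash-loop backoff per restart and caps it at MaxContainerBackOff (5m in upstream kubelet defaults, with a 10s initial backoff). A minimal Go sketch of that doubling-with-cap logic, illustrative only and not the kubelet source:

package main

import (
	"fmt"
	"time"
)

// restartDelay models kubelet-style crash-loop backoff: the delay doubles
// per restart and saturates at the MaxContainerBackOff cap. By restartCount
// 7 the cap is already reached, matching the ~5m gaps in the log above.
func restartDelay(restartCount int) time.Duration {
	delay := 10 * time.Second        // assumed initial backoff (upstream default)
	const maxDelay = 5 * time.Minute // MaxContainerBackOff
	for i := 0; i < restartCount; i++ {
		delay *= 2
		if delay > maxDelay {
			return maxDelay
		}
	}
	return delay
}

func main() {
	for n := 0; n <= 10; n++ {
		fmt.Printf("restartCount=%d delay=%v\n", n, restartDelay(n))
	}
}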
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:52:20.33
Jan 24 19:52:20.330: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 01/24/23 19:52:20.332
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:52:20.642
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:52:20.845
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431
Jan 24 19:52:21.685: INFO: Create a RollingUpdate DaemonSet
Jan 24 19:52:21.791: INFO: Check that daemon pods launch on every node of the cluster
Jan 24 19:52:21.939: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:22.054: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:22.054: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:23.202: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:23.317: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:23.317: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:24.203: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:24.318: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:24.318: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:25.203: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:25.321: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:25.321: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:26.208: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:26.323: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:52:26.323: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:52:27.202: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:27.318: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:52:27.318: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:52:28.202: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:28.317: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:52:28.317: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:52:29.203: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:29.318: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:52:29.319: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:52:30.203: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:30.318: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 24 19:52:30.318: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
Jan 24 19:52:30.318: INFO: Update the DaemonSet to trigger a rollout
Jan 24 19:52:30.531: INFO: Updating DaemonSet daemon-set
Jan 24 19:52:38.027: INFO: Roll back the DaemonSet before rollout is complete
Jan 24 19:52:38.238: INFO: Updating DaemonSet daemon-set
Jan 24 19:52:38.238: INFO: Make sure DaemonSet rollback is complete
Jan 24 19:52:38.500: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:39.766: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:40.765: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:41.763: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:42.765: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:43.619: INFO: Pod daemon-set-xk542 is not available
Jan 24 19:52:43.767: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set" 01/24/23 19:52:43.986
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7487, will wait for the garbage collector to delete the pods 01/24/23 19:52:43.986
Jan 24 19:52:44.347: INFO: Deleting DaemonSet.extensions daemon-set took: 108.021458ms
Jan 24 19:52:44.448: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.509807ms
Jan 24 19:52:53.251: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:53.251: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 24 19:52:53.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"8111"},"items":null}
Jan 24 19:52:53.455: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8111"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
Jan 24 19:52:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7487" for this suite.
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","completed":6,"skipped":367,"failed":0}
------------------------------
• [33.755 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431
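An aside on the repeated "can't tolerate node ... control-plane" lines: they are expected, not a failure. The test DaemonSet carries no toleration for the control-plane node's NoSchedule taint, so the framework skips that node when counting daemon pods. A hedged Go sketch of the toleration a DaemonSet pod spec would need to also land on such a node (client-go corev1 types; illustrative, not part of the e2e suite):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerating the exact taint logged above would let a DaemonSet pod
	// schedule onto capz-conf-a7mu8n-control-plane-46cr5 as well; the
	// conformance DaemonSet omits this on purpose, so the node is skipped
	// rather than counted.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/control-plane",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}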
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:52:54.086
Jan 24 19:52:54.086: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 01/24/23 19:52:54.087
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:52:54.401
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:52:54.605
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165
STEP: Creating simple DaemonSet "daemon-set" 01/24/23 19:52:55.293
STEP: Check that daemon pods launch on every node of the cluster. 01/24/23 19:52:55.401
Jan 24 19:52:55.549: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:55.663: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:55.663: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:56.811: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:56.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:56.926: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:57.817: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:57.930: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:57.931: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:58.812: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:58.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:52:58.926: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 19:52:59.811: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:52:59.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:52:59.926: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:00.812: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:00.927: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:00.927: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:01.812: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:01.927: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:01.927: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:02.811: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:02.925: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 24 19:53:02.925: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Stop a daemon pod, check that the daemon pod is revived. 01/24/23 19:53:03.028
Jan 24 19:53:03.407: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:03.522: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:03.522: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:04.670: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:04.784: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:04.784: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:05.672: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:05.786: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:05.786: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:06.672: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:06.786: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:06.786: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:07.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:07.785: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:07.786: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:08.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:08.785: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:08.785: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:09.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:09.785: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:09.785: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:10.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:10.785: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:10.785: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:11.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:11.793: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:11.793: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:12.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:12.786: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:12.786: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:13.672: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:13.792: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:13.792: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:14.671: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:14.785: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 24 19:53:14.785: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 19:53:15.672: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 19:53:15.787: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 24 19:53:15.787: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set" 01/24/23 19:53:15.889
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6979, will wait for the garbage collector to delete the pods 01/24/23 19:53:15.889
Jan 24 19:53:16.249: INFO: Deleting DaemonSet.extensions daemon-set took: 106.470292ms
Jan 24 19:53:16.350: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.813201ms
Jan 24 19:53:22.554: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 24 19:53:22.554: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 24 19:53:22.656: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"8298"},"items":null}
Jan 24 19:53:22.765: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8299"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
Jan 24 19:53:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6979" for this suite.
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","completed":7,"skipped":375,"failed":0}
------------------------------
• [29.326 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165
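For orientation, the "simple DaemonSet" these specs create and delete is essentially a one-container DaemonSet whose selector matches its pod template labels, yielding one pod per schedulable node. A rough Go sketch with client-go types; the label and image are illustrative (the httpd image appears elsewhere in this run), not the suite's exact spec:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set"} // hypothetical label set
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// Selector and template labels must match, or the API server
			// rejects the DaemonSet.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "registry.k8s.io/e2e-test-images/httpd:2.4.38-2",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Name)
}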
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:53:23.416
Jan 24 19:53:23.416: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 19:53:23.418
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:53:23.73
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:53:23.932
[It] should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96
STEP: setting up resource consumer and HPA 01/24/23 19:53:24.143
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas 01/24/23 19:53:24.143
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-8663 01/24/23 19:53:24.263
I0124 19:53:24.370499 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-8663, replica count: 2
I0124 19:53:34.521786 14 runners.go:193] consumer Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 19:53:34.521
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-8663 01/24/23 19:53:34.642
I0124 19:53:34.750345 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-8663, replica count: 1
I0124 19:53:44.902761 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 19:53:49.903: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 24 19:53:50.005: INFO: RC consumer: consume 220 millicores in total
Jan 24 19:53:50.005: INFO: RC consumer: consume 0 MB in total
Jan 24 19:53:50.005: INFO: RC consumer: setting consumption to 220 millicores in total
Jan 24 19:53:50.005: INFO: RC consumer: disabling mem consumption
Jan 24 19:53:50.005: INFO: RC consumer: consume custom metric 0 in total
Jan 24 19:53:50.006: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: triggering scale down to record a recommendation 01/24/23 19:53:50.114
Jan 24 19:53:50.115: INFO: RC consumer: consume 110 millicores in total
Jan 24 19:53:50.115: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 24 19:53:50.217: INFO: waiting for 1 replicas (current: 2)
Jan 24 19:54:10.324: INFO: waiting for 1 replicas (current: 1)
STEP: triggering scale up by increasing consumption 01/24/23 19:54:10.324
Jan 24 19:54:10.324: INFO: RC consumer: consume 330 millicores in total
Jan 24 19:54:10.325: INFO: RC consumer: setting consumption to 330 millicores in total
Jan 24 19:54:10.427: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:54:20.006: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:54:20.006: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:54:30.534: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:54:50.529: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:54:56.161: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:54:56.161: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:55:10.530: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:55:26.274: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:55:26.274: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:55:30.530: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:55:50.530: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:55:56.389: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:55:56.389: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:56:10.531: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:56:26.501: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:56:26.501: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:56:30.530: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:56:50.530: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:56:56.615: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:56:56.615: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:57:10.533: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:57:26.735: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 19:57:26.735: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8663/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 19:57:30.533: INFO: waiting for 3 replicas (current: 1)
Jan 24 19:57:50.531: INFO: waiting for 3 replicas (current: 3)
STEP: verifying time waited for a scale up 01/24/23 19:57:50.531
Jan 24 19:57:50.531: INFO: time waited for scale up: 3m40.206328153s
STEP: Removing consuming RC consumer 01/24/23 19:57:50.638
Jan 24 19:57:50.638: INFO: RC consumer: stopping metric consumer
Jan 24 19:57:50.638: INFO: RC consumer: stopping mem consumer
Jan 24 19:57:50.639: INFO: RC consumer: stopping CPU consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-8663, will wait for the garbage collector to delete the pods 01/24/23 19:58:00.64
Jan 24 19:58:01.002: INFO: Deleting Deployment.apps consumer took: 108.578906ms
Jan 24 19:58:01.102: INFO: Terminating Deployment.apps consumer pods took: 100.175479ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-8663, will wait for the garbage collector to delete the pods 01/24/23 19:58:03.938
Jan 24 19:58:04.298: INFO: Deleting ReplicationController consumer-ctrl took: 107.056245ms
Jan 24 19:58:04.398: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.431428ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187
Jan 24 19:58:06.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-8663" for this suite.
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period","completed":8,"skipped":412,"failed":0}
------------------------------
• [SLOW TEST] [283.310 seconds] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23 with long upscale stabilization window test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:95 should scale up only after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:96
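The ~3m40s "time waited for scale up" above is the point of this spec: with a long scaleUp stabilization window, the HPA keeps honoring the earlier low recommendation and only scales up once the window elapses. A hedged Go sketch of the knob being exercised, using autoscaling/v2 types (the 3-minute value is illustrative; the suite's exact window may differ):

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

func main() {
	window := int32(180) // assumed example: hold recommendations for 3 minutes
	behavior := autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleUp: &autoscalingv2.HPAScalingRules{
			// The controller acts on recommendations observed within this
			// window, which delays scale-up after a load spike; this is the
			// behavior the log's long wait demonstrates.
			StabilizationWindowSeconds: &window,
		},
	}
	fmt.Printf("%+v\n", *behavior.ScaleUp)
}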
for 3 replicas (current: 1) Jan 24 19:57:50.531: INFO: waiting for 3 replicas (current: 3) �[1mSTEP:�[0m verifying time waited for a scale up �[38;5;243m01/24/23 19:57:50.531�[0m Jan 24 19:57:50.531: INFO: time waited for scale up: 3m40.206328153s �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m01/24/23 19:57:50.638�[0m Jan 24 19:57:50.638: INFO: RC consumer: stopping metric consumer Jan 24 19:57:50.638: INFO: RC consumer: stopping mem consumer Jan 24 19:57:50.639: INFO: RC consumer: stopping CPU consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-8663, will wait for the garbage collector to delete the pods �[38;5;243m01/24/23 19:58:00.64�[0m Jan 24 19:58:01.002: INFO: Deleting Deployment.apps consumer took: 108.578906ms Jan 24 19:58:01.102: INFO: Terminating Deployment.apps consumer pods took: 100.175479ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-8663, will wait for the garbage collector to delete the pods �[38;5;243m01/24/23 19:58:03.938�[0m Jan 24 19:58:04.298: INFO: Deleting ReplicationController consumer-ctrl took: 107.056245ms Jan 24 19:58:04.398: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.431428ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 Jan 24 19:58:06.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-8663" for this suite. �[38;5;243m01/24/23 19:58:06.611�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-node] Variable Expansion�[0m �[1mshould fail substituting values in a volume subpath with absolute path [Slow] [Conformance]�[0m �[38;5;243mtest/e2e/common/node/expansion.go:185�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 19:58:06.728�[0m Jan 24 19:58:06.728: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename var-expansion �[38;5;243m01/24/23 19:58:06.729�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 19:58:07.04�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 19:58:07.243�[0m [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:185 Jan 24 19:58:07.558: INFO: Waiting up to 2m0s for pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d" in namespace "var-expansion-2951" to be "container 0 failed with reason CreateContainerConfigError" Jan 24 19:58:07.662: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. Elapsed: 103.28262ms Jan 24 19:58:09.772: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213105758s Jan 24 19:58:11.770: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. 
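The long scale-up stabilization window exercised by this spec corresponds to the behavior stanza of an autoscaling/v2 HorizontalPodAutoscaler. A minimal sketch in Go of such an object (the target name consumer matches the test's deployment, but the 180-second window and the replica bounds are illustrative assumptions, not values read from this run):

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func i32(v int32) *int32 { return &v }

func main() {
	// An HPA whose scale-up decisions are held back by a stabilization
	// window: the controller only acts on a higher recommendation after
	// it has persisted for the whole window, which is the delay the test
	// measures ("time waited for scale up: 3m40.2s").
	hpa := autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "consumer",
			},
			MinReplicas: i32(1), // illustrative bound
			MaxReplicas: 5,      // illustrative bound
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{
					// Assumed value; the e2e test configures its own window.
					StabilizationWindowSeconds: i32(180),
				},
			},
		},
	}
	fmt.Printf("scale-up stabilization window: %ds\n",
		*hpa.Spec.Behavior.ScaleUp.StabilizationWindowSeconds)
}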
[sig-node] Variable Expansion
should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
test/e2e/common/node/expansion.go:185
[BeforeEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:58:06.728
Jan 24 19:58:06.728: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 01/24/23 19:58:06.729
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:58:07.04
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:58:07.243
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
test/e2e/common/node/expansion.go:185
Jan 24 19:58:07.558: INFO: Waiting up to 2m0s for pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d" in namespace "var-expansion-2951" to be "container 0 failed with reason CreateContainerConfigError"
Jan 24 19:58:07.662: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. Elapsed: 103.28262ms
Jan 24 19:58:09.772: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213105758s
Jan 24 19:58:11.770: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211752402s
Jan 24 19:58:11.770: INFO: Pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Jan 24 19:58:11.770: INFO: Deleting pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d" in namespace "var-expansion-2951"
Jan 24 19:58:11.889: INFO: Wait up to 5m0s for pod "var-expansion-bb212096-faf6-46b7-811b-0c15b2d6ba2d" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/framework.go:187
Jan 24 19:58:16.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2951" for this suite. 01/24/23 19:58:16.253
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","completed":9,"skipped":430,"failed":0}
------------------------------
• [9.639 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
test/e2e/common/node/expansion.go:185
------------------------------
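The pass condition here is that the kubelet rejects the container with CreateContainerConfigError before it ever starts, which happens when a volume mount's subPathExpr expands to an absolute path. A hedged sketch of such a pod in Go (the corev1 field names are real; the annotation key mysubpath, the image, and the paths are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// invalidSubPathPod builds a pod whose subPathExpr resolves to an
// absolute path via downward-API env substitution; the kubelet is
// expected to fail it with CreateContainerConfigError.
func invalidSubPathPod() corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "var-expansion",
			// Illustrative annotation carrying the absolute path.
			Annotations: map[string]string{"mysubpath": "/absolute-path"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // illustrative image
				Env: []corev1.EnvVar{{
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "work",
					MountPath: "/volume_mount",
					// Expands to "/absolute-path"; absolute subpaths are rejected.
					SubPathExpr: "$(POD_SUBPATH)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}

func main() { _ = invalidSubPathPod() }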
[sig-scheduling] SchedulerPreemption [Serial]
validates basic preemption works [Conformance]
test/e2e/scheduling/preemption.go:125
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:58:16.372
Jan 24 19:58:16.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/24/23 19:58:16.373
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:58:16.694
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:58:16.897
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:92
Jan 24 19:58:17.420: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 19:59:18.437: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
test/e2e/scheduling/preemption.go:125
STEP: Create pods that use 4/5 of node resources. 01/24/23 19:59:18.544
Jan 24 19:59:18.781: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 24 19:59:18.891: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Jan 24 19:59:19.119: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Jan 24 19:59:19.227: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 01/24/23 19:59:19.228
Jan 24 19:59:19.228: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6440" to be "running"
Jan 24 19:59:19.330: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 102.341046ms
Jan 24 19:59:21.437: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209009411s
Jan 24 19:59:23.442: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213790274s
Jan 24 19:59:25.438: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210039736s
Jan 24 19:59:27.438: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210048864s
Jan 24 19:59:29.438: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210419736s
Jan 24 19:59:31.437: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209281414s
Jan 24 19:59:33.437: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209411245s
Jan 24 19:59:35.438: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 16.210047112s
Jan 24 19:59:37.437: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209552997s
Jan 24 19:59:39.439: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 20.211162088s
Jan 24 19:59:39.439: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Jan 24 19:59:39.439: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6440" to be "running"
Jan 24 19:59:39.545: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 106.312025ms
Jan 24 19:59:39.545: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
Jan 24 19:59:39.545: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6440" to be "running"
Jan 24 19:59:39.652: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 106.287729ms
Jan 24 19:59:41.759: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214058686s
Jan 24 19:59:43.759: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.214120915s
Jan 24 19:59:43.759: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running"
Jan 24 19:59:43.760: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6440" to be "running"
Jan 24 19:59:43.866: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 106.793653ms
Jan 24 19:59:45.973: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.213444113s
Jan 24 19:59:45.973: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a high priority pod that has same requirements as that of lower priority pod 01/24/23 19:59:45.973
Jan 24 19:59:46.079: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-6440" to be "running"
Jan 24 19:59:46.181: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 102.309401ms
Jan 24 19:59:48.284: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204983266s
Jan 24 19:59:50.288: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209215668s
Jan 24 19:59:52.289: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209998815s
Jan 24 19:59:54.289: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.210375892s
Jan 24 19:59:54.289: INFO: Pod "preemptor-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:187
Jan 24 19:59:54.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6440" for this suite. 01/24/23 19:59:54.949
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","completed":10,"skipped":528,"failed":0}
------------------------------
• [99.231 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
validates basic preemption works [Conformance]
test/e2e/scheduling/preemption.go:125
------------------------------
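Preemption here hinges on PriorityClass objects and the priorityClassName on pods: the preemptor's higher value lets the scheduler evict a low-priority filler to make room. A minimal sketch under assumed values (class names, priority values, image, and resource requests are illustrative; the test creates its own classes and sizes pods to 4/5 of node capacity):

package main

import (
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two priority classes: fillers run at low priority, the preemptor
	// at high priority so the scheduler may evict a filler.
	low := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "low-priority"}, // illustrative name
		Value:      10,
	}
	high := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // illustrative name
		Value:      1000,
	}
	preemptor := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: high.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8", // illustrative image
				Resources: corev1.ResourceRequirements{
					// Sized so the pod only fits if a lower-priority pod is
					// preempted (value illustrative).
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}
	_, _, _ = low, high, preemptor
}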
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
GMSA support
can read and write file to remote SMB folder
test/e2e/windows/gmsa_full.go:167
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:59:55.605
Jan 24 19:59:55.605: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows 01/24/23 19:59:55.607
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:59:55.918
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:59:56.12
[It] can read and write file to remote SMB folder
test/e2e/windows/gmsa_full.go:167
STEP: finding the worker node that fulfills this test's assumptions 01/24/23 19:59:56.323
Jan 24 19:59:56.427: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
test/e2e/framework/framework.go:187
Jan 24 19:59:56.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-full-test-windows-3736" for this suite.
01/24/23 19:59:56.561
------------------------------
S [SKIPPED] [1.065 seconds]
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
test/e2e/windows/framework.go:27
GMSA support
test/e2e/windows/gmsa_full.go:96
[It] can read and write file to remote SMB folder
test/e2e/windows/gmsa_full.go:167
Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
In [It] at: test/e2e/windows/gmsa_full.go:173
Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.glob..func5.1.2()
	test/e2e/windows/gmsa_full.go:173 +0x645
------------------------------
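This skip is the expected outcome when the cluster template does not provision a Windows pool carrying the agentpool=windowsgmsa label. Roughly the lookup the spec performs, sketched with client-go (the kubeconfig path matches this run; error handling and wiring are simplified):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the e2e run used /tmp/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List nodes carrying the label the GMSA spec requires; it expects
	// exactly one match and skips the test otherwise.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(),
		metav1.ListOptions{LabelSelector: "agentpool=windowsgmsa"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d node(s) with agentpool=windowsgmsa\n", len(nodes.Items))
}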
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
test/e2e/apps/statefulset.go:585
[BeforeEach] [sig-apps] StatefulSet
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 19:59:56.672
Jan 24 19:59:56.672: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset 01/24/23 19:59:56.673
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 19:59:56.987
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 19:59:57.19
[BeforeEach] [sig-apps] StatefulSet
test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-1537 01/24/23 19:59:57.393
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
test/e2e/apps/statefulset.go:585
STEP: Initializing watcher for selector baz=blah,foo=bar 01/24/23 19:59:57.5
STEP: Creating stateful set ss in namespace statefulset-1537 01/24/23 19:59:57.603
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1537 01/24/23 19:59:57.711
Jan 24 19:59:57.813: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 20:00:07.921: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 01/24/23 20:00:07.921
Jan 24 20:00:08.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 20:00:09.202: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 20:00:09.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 20:00:09.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 20:00:09.309: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 24 20:00:19.416: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 20:00:19.416: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 20:00:19.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999672s
Jan 24 20:00:20.947: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.892836542s
Jan 24 20:00:22.055: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.78427816s
Jan 24 20:00:23.162: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.67753947s
Jan 24 20:00:24.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.570493049s
Jan 24 20:00:25.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.463930336s
Jan 24 20:00:26.485: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.354587832s
Jan 24 20:00:27.593: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.246571466s
Jan 24 20:00:28.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.139603833s
Jan 24 20:00:29.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 32.488927ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1537 01/24/23 20:00:30.807
Jan 24 20:00:30.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 20:00:32.108: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 20:00:32.108: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 20:00:32.108: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 20:00:32.215: INFO: Found 1 stateful pods, waiting for 3
Jan 24 20:00:42.331: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 20:00:42.331: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 20:00:42.331: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 20:00:52.330: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 20:00:52.330: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 20:00:52.330: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order 01/24/23 20:00:52.33
STEP: Scale down will halt with unhealthy stateful pod 01/24/23 20:00:52.33
Jan 24 20:00:52.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 20:00:53.706: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 20:00:53.706: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 20:00:53.706: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 20:00:53.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 20:00:54.889: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 20:00:54.889: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 20:00:54.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 20:00:54.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 20:00:56.008: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 20:00:56.008: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 20:00:56.008: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 20:00:56.008: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 20:00:56.227: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 20:00:56.227: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 20:00:56.227: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 20:00:56.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999678s
Jan 24 20:00:57.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.886032552s
Jan 24 20:00:58.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.771427782s
Jan 24 20:00:59.900: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.656381556s
Jan 24 20:01:01.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.540630607s
Jan 24 20:01:02.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.425714093s
Jan 24 20:01:03.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.310897527s
Jan 24 20:01:04.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.195738493s
Jan 24 20:01:05.476: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.081183141s
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1537 01/24/23 20:01:06.477
Jan 24 20:01:06.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 20:01:07.730: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 20:01:07.730: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 20:01:07.730: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 20:01:07.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 20:01:08.883: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 20:01:08.883: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 20:01:08.883: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 20:01:08.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1537 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 20:01:10.036: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 20:01:10.036: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 20:01:10.036: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 20:01:10.036: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order 01/24/23 20:01:30.463
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:122
Jan 24 20:01:30.463: INFO: Deleting all statefulset in ns statefulset-1537
Jan 24 20:01:30.566: INFO: Scaling statefulset ss to 0
Jan 24 20:01:30.874: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 20:01:30.977: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
test/e2e/framework/framework.go:187
Jan 24 20:01:31.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1537" for this suite.
01/24/23 20:01:31.424
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","completed":11,"skipped":558,"failed":0}
------------------------------
• [94.858 seconds]
[sig-apps] StatefulSet
test/e2e/apps/framework.go:23
Basic StatefulSet functionality [StatefulSetBasic]
test/e2e/apps/statefulset.go:101
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
test/e2e/apps/statefulset.go:585
------------------------------
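The ordered scale-up/scale-down and the halt-on-unhealthy behavior verified above follow from the StatefulSet's OrderedReady pod management policy together with a readiness probe (the test flips readiness by moving index.html out of and back into the web root). A sketch of the relevant spec fields (replica count, image, and probe details are illustrative assumptions):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func i32(v int32) *int32 { return &v }

func main() {
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test",
			Replicas:    i32(3),
			// OrderedReady (the default) creates pods one at a time in
			// ordinal order and deletes them in reverse; a pod that is not
			// Ready halts further scaling, which is what the repeated
			// "doesn't scale past N" checks in the log observe.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"foo": "bar"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"foo": "bar", "baz": "blah"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4", // illustrative image
						ReadinessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								HTTPGet: &corev1.HTTPGetAction{
									// Readiness fails while index.html is
									// parked in /tmp, halting scaling.
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	_ = ss
}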
[sig-node] Pods
should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
test/e2e/common/node/pods.go:675
[BeforeEach] [sig-node] Pods
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:01:31.533
Jan 24 20:01:31.533: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods 01/24/23 20:01:31.535
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:01:31.844
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:01:32.047
[BeforeEach] [sig-node] Pods
test/e2e/common/node/pods.go:193
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
test/e2e/common/node/pods.go:675
Jan 24 20:01:32.361: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-9594" to be "running and ready"
Jan 24 20:01:32.464: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 103.02725ms
Jan 24 20:01:32.464: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 24 20:01:34.567: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20631878s
Jan 24 20:01:34.567: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 24 20:01:36.570: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208652537s
Jan 24 20:01:36.570: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 24 20:01:38.567: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.206220017s
Jan 24 20:01:38.567: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Jan 24 20:01:40.572: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 8.210709356s
Jan 24 20:01:40.572: INFO: The phase of Pod pod-back-off-image is Running (Ready = true)
Jan 24 20:01:40.572: INFO: Pod "pod-back-off-image" satisfied condition "running and ready"
STEP: getting restart delay-0 01/24/23 20:02:40.679
Jan 24 20:02:50.643: INFO: getRestartDelay: restartCount = 3, finishedAt=2023-01-24 20:02:19 +0000 UTC restartedAt=2023-01-24 20:02:49 +0000 UTC (30s)
STEP: getting restart delay-1 01/24/23 20:02:50.643
Jan 24 20:03:41.555: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-24 20:02:54 +0000 UTC restartedAt=2023-01-24 20:03:40 +0000 UTC (46s)
STEP: getting restart delay-2 01/24/23 20:03:41.556
Jan 24 20:05:14.541: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-24 20:03:45 +0000 UTC restartedAt=2023-01-24 20:05:13 +0000 UTC (1m28s)
STEP: updating the image 01/24/23 20:05:14.541
Jan 24 20:05:15.262: INFO: Successfully updated pod "pod-back-off-image"
Jan 24 20:05:25.264: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-9594" to be "running"
Jan 24 20:05:25.372: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 107.135185ms
Jan 24 20:05:25.372: INFO: Pod "pod-back-off-image" satisfied condition "running"
STEP: get restart delay after image update 01/24/23 20:05:25.372
Jan 24 20:05:44.190: INFO: Container's last state is not "Terminated".
Jan 24 20:05:45.296: INFO: Container's last state is not "Terminated".
Jan 24 20:05:46.404: INFO: Container's last state is not "Terminated".
Jan 24 20:05:47.510: INFO: Container's last state is not "Terminated".
Jan 24 20:05:48.617: INFO: Container's last state is not "Terminated".
Jan 24 20:05:49.724: INFO: Container's last state is not "Terminated".
Jan 24 20:05:50.830: INFO: Container's last state is not "Terminated".
Jan 24 20:05:51.937: INFO: Container's last state is not "Terminated".
Jan 24 20:05:53.046: INFO: Container's last state is not "Terminated".
Jan 24 20:05:54.153: INFO: Container's last state is not "Terminated".
Jan 24 20:05:55.259: INFO: Container's last state is not "Terminated".
Jan 24 20:05:56.367: INFO: Container's last state is not "Terminated".
Jan 24 20:05:57.474: INFO: Container's last state is not "Terminated".
Jan 24 20:05:58.581: INFO: Container's last state is not "Terminated".
Jan 24 20:05:59.687: INFO: Container's last state is not "Terminated".
Jan 24 20:06:00.794: INFO: Container's last state is not "Terminated".
Jan 24 20:06:01.900: INFO: Container's last state is not "Terminated".
Jan 24 20:06:03.006: INFO: Container's last state is not "Terminated".
Jan 24 20:06:04.114: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-24 20:05:24 +0000 UTC restartedAt=2023-01-24 20:05:43 +0000 UTC (19s)
[AfterEach] [sig-node] Pods
test/e2e/framework/framework.go:187
Jan 24 20:06:04.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9594" for this suite.
01/24/23 20:06:04.247
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","completed":12,"skipped":607,"failed":0}
------------------------------
• [SLOW TEST] [272.823 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:675

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-node] Pods
  test/e2e/framework/framework.go:186
  STEP: Creating a kubernetes client 01/24/23 20:01:31.533
  Jan 24 20:01:31.533: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename pods 01/24/23 20:01:31.535
  STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:01:31.844
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:01:32.047
  [BeforeEach] [sig-node] Pods
  test/e2e/common/node/pods.go:193
  [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  test/e2e/common/node/pods.go:675
  Jan 24 20:01:32.361: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-9594" to be "running and ready"
  Jan 24 20:01:32.464: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 103.02725ms
  (three more "Pending" polls over the next 8s elided)
  Jan 24 20:01:40.572: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 8.210709356s
  Jan 24 20:01:40.572: INFO: Pod "pod-back-off-image" satisfied condition "running and ready"
  STEP: getting restart delay-0 01/24/23 20:02:40.679
  Jan 24 20:02:50.643: INFO: getRestartDelay: restartCount = 3, finishedAt=2023-01-24 20:02:19 +0000 UTC restartedAt=2023-01-24 20:02:49 +0000 UTC (30s)
  STEP: getting restart delay-1 01/24/23 20:02:50.643
  Jan 24 20:03:41.555: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-24 20:02:54 +0000 UTC restartedAt=2023-01-24 20:03:40 +0000 UTC (46s)
  STEP: getting restart delay-2 01/24/23 20:03:41.556
  Jan 24 20:05:14.541: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-24 20:03:45 +0000 UTC restartedAt=2023-01-24 20:05:13 +0000 UTC (1m28s)
  STEP: updating the image 01/24/23 20:05:14.541
  Jan 24 20:05:15.262: INFO: Successfully updated pod "pod-back-off-image"
  Jan 24 20:05:25.264: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-9594" to be "running"
  Jan 24 20:05:25.372: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 107.135185ms
  Jan 24 20:05:25.372: INFO: Pod "pod-back-off-image" satisfied condition "running"
  STEP: get restart delay after image update 01/24/23 20:05:25.372
  Jan 24 20:05:44.190: INFO: Container's last state is not "Terminated".
  (the same message, logged roughly every 1.1s through 20:06:03.006, elided)
  Jan 24 20:06:04.114: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-24 20:05:24 +0000 UTC restartedAt=2023-01-24 20:05:43 +0000 UTC (19s)
  [AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
  Jan 24 20:06:04.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "pods-9594" for this suite. 01/24/23 20:06:04.247
  << End Captured GinkgoWriter Output
------------------------------
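The delays recorded above are the behavior under test: the kubelet restarts a crash-looping container with a delay that roughly doubles per restart up to a cap, and an image update discards the accumulated back-off, which is why the delay drops from 1m28s to 19s after the update. A minimal Go sketch of that schedule follows; it is a model only, not kubelet source, and the 10s base and 5m cap are assumed kubelet defaults. The measured 30s/46s/1m28s deltas only approximately double because the finishedAt/restartedAt stamps also absorb polling granularity and container start time.

// backoff_model.go: illustrative model of the crash-loop restart schedule.
package main

import (
	"fmt"
	"time"
)

const (
	baseDelay = 10 * time.Second // assumed kubelet initial back-off
	maxDelay  = 5 * time.Minute  // assumed kubelet back-off cap
)

// delayBeforeRestart models the wait inserted before the nth restart.
func delayBeforeRestart(n int) time.Duration {
	d := baseDelay
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("restart %d: wait %v\n", n, delayBeforeRestart(n))
	}
	// After an image update the back-off entry is discarded, so the next
	// wait starts near baseDelay again: 1m28s before vs 19s after in the log.
}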
[sig-scheduling] SchedulerPreemption [Serial]
validates lower priority pod preemption by critical pod [Conformance]
test/e2e/scheduling/preemption.go:218
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:06:04.372
Jan 24 20:06:04.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/24/23 20:06:04.373
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:06:04.683
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:06:04.885
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:92
Jan 24 20:06:05.417: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 20:07:06.394: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
test/e2e/scheduling/preemption.go:218
STEP: Create pods that use 4/5 of node resources. 01/24/23 20:07:06.5
Jan 24 20:07:06.732: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 24 20:07:06.840: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Jan 24 20:07:07.065: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Jan 24 20:07:07.174: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 01/24/23 20:07:07.174
Jan 24 20:07:07.174: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-3466" to be "running"
("Pending" polls elided)
Jan 24 20:07:13.383: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Jan 24 20:07:13.490: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
Jan 24 20:07:13.596: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running"
Jan 24 20:07:13.702: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a critical pod that uses the same resources as a lower-priority pod 01/24/23 20:07:13.702
Jan 24 20:07:13.816: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running"
("Pending" polls elided)
Jan 24 20:07:22.026: INFO: Pod "critical-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:187
Jan 24 20:07:22.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3466" for this suite. 01/24/23 20:07:22.91
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","completed":13,"skipped":904,"failed":0}
------------------------------
• [79.212 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/scheduling/preemption.go:218
------------------------------
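The spec passes because the scheduler preempts one of the filler pods to make room for critical-pod: pod priority comes from the PriorityClass a pod names, and the reserved system-cluster-critical class (usable only in kube-system) outranks the test's low and medium classes. A minimal client-go sketch of the two objects involved; this is illustrative, not the test's code, and the name demo-low and the pause image are stand-ins.

// priorityclass_sketch.go: a PriorityClass plus a pod that outranks it.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A cluster-scoped PriorityClass: pods referencing it get value 100.
	low := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-low"},
		Value:      100,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, low, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod using the reserved "system-cluster-critical" class, mirroring
	// the spec's critical-pod; it may preempt lower-priority pods.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name: "pause", Image: "registry.k8s.io/pause:3.8",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("kube-system").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Preemption itself is then just the scheduler evicting the lowest-priority victim that frees enough resources, which is what the low-priority filler pod experiences above.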
[sig-api-machinery] Garbage collector
should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905
[BeforeEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:07:23.587
Jan 24 20:07:23.587: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/24/23 20:07:23.588
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:07:23.902
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:07:24.105
[It] should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905
Jan 24 20:07:24.308: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 20:07:26.977: INFO: created owner resource "ownergl4zl"
Jan 24 20:07:27.085: INFO: created dependent resource "dependent87mqg"
Jan 24 20:07:27.296: INFO: created canary resource "canarysq9fg"
[AfterEach] [sig-api-machinery] Garbage collector
test/e2e/framework/framework.go:187
Jan 24 20:07:58.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7731" for this suite. 01/24/23 20:07:58.164
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","completed":14,"skipped":929,"failed":0}
------------------------------
• [34.684 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:905
------------------------------
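Cascading deletion hinges on metadata.ownerReferences: the garbage collector deletes dependents whose owner disappears, while the unrelated canary resource must survive to prove the collector did not over-delete. A hedged client-go sketch of the mechanism, using ConfigMaps in place of the test's custom resources; names are hypothetical.

// ownerref_sketch.go: owner/dependent wiring and a cascading delete.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "default"

	owner, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "owner"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The ownerReference ties "dependent" to "owner"; the GC keys on UID.
	dep := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
		Name: "dependent",
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion: "v1", Kind: "ConfigMap",
			Name: owner.Name, UID: owner.UID,
		}},
	}}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Foreground propagation: the owner is removed only after its dependents.
	// Background (the default) and Orphan are the other policies.
	fg := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ConfigMaps(ns).Delete(ctx, owner.Name,
		metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}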
[sig-windows] [Feature:Windows] Density [Serial] [Slow]
create a batch of pods
  latency/resource should be within limit when create 10 pods with 0s interval
  test/e2e/windows/density.go:68
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:07:58.276
Jan 24 20:07:58.276: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename density-test-windows 01/24/23 20:07:58.277
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:07:58.589
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:07:58.792
[It] latency/resource should be within limit when create 10 pods with 0s interval
test/e2e/windows/density.go:68
STEP: Creating a batch of pods 01/24/23 20:07:58.996
STEP: Waiting for all Pods to be observed by the watch... 01/24/23 20:07:58.997
Jan 24 20:08:19.114: INFO: Waiting for pod test-85079dad-df42-4e52-a155-e2569d972d24 to disappear
Jan 24 20:08:19.280: INFO: Pod test-85079dad-df42-4e52-a155-e2569d972d24 still exists
(first polling round at 20:08:19: all 10 test pods still exist; remaining per-pod lines elided)
Jan 24 20:08:49.283: INFO: Waiting for pod test-85079dad-df42-4e52-a155-e2569d972d24 to disappear
Jan 24 20:08:49.392: INFO: Pod test-85079dad-df42-4e52-a155-e2569d972d24 no longer exists
Jan 24 20:08:49.392: INFO: Pod test-19ad4826-e0b9-43e5-8220-d613597edd6e no longer exists
Jan 24 20:08:49.435: INFO: Pod test-0ffc403c-141e-4567-8449-559a7d7c76f6 no longer exists
Jan 24 20:08:49.455: INFO: Pod test-e59f7ddf-953f-4caa-ac96-af0adb99e2a3 no longer exists
Jan 24 20:08:49.456: INFO: Pod test-0c294513-5f72-49d6-85ab-3ea63e674645 no longer exists
Jan 24 20:08:49.476: INFO: Pod test-e18e4e41-df72-4134-ad44-bc5cee3a1710 no longer exists
Jan 24 20:08:49.495: INFO: Pod test-e72ce6b1-7458-456f-8985-d1c6a01d5fef no longer exists
Jan 24 20:08:49.509: INFO: Pod test-9a9ea46d-2c30-4d31-9f6b-0cad5d5d8b3b no longer exists
Jan 24 20:08:49.519: INFO: Pod test-d430bee2-f1d8-43b9-9cc2-b1753defe148 no longer exists
Jan 24 20:08:49.526: INFO: Pod test-171574d6-5011-4625-b0d8-04d0be17345d no longer exists
[AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
test/e2e/framework/framework.go:187
Jan 24 20:08:49.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "density-test-windows-4155" for this suite. 01/24/23 20:08:49.634
{"msg":"PASSED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","completed":15,"skipped":981,"failed":0}
------------------------------
• [51.465 seconds]
[sig-windows] [Feature:Windows] Density [Serial] [Slow]
test/e2e/windows/framework.go:27
  create a batch of pods
  test/e2e/windows/density.go:47
  latency/resource should be within limit when create 10 pods with 0s interval
  test/e2e/windows/density.go:68
------------------------------
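The spec creates 10 pods back to back and derives startup latency from watch events rather than polling, which is what "Waiting for all Pods to be observed by the watch" refers to. A rough client-go sketch of that create-then-watch pattern, illustrative only (the real test also tracks resource usage and compares latencies against limits); the namespace and image are hypothetical.

// density_sketch.go: create N pods with 0s interval, time their observation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "density-demo"

	// Start the watch before creating, so no event is missed.
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	start := time.Now()
	for i := 0; i < 10; i++ { // 0s interval between creates
		p := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("test-%d", i)},
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name: "pause", Image: "registry.k8s.io/pause:3.8",
			}}},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(ctx, p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// Record the first time each pod is observed Running.
	running := map[string]bool{}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if pod.Status.Phase == corev1.PodRunning && !running[pod.Name] {
			running[pod.Name] = true
			fmt.Printf("%s running after %v\n", pod.Name, time.Since(start))
		}
		if len(running) == 10 {
			break
		}
	}
}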
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
kubelet GMSA support
  when creating a pod with correct GMSA credential specs
  passes the credential specs down to the Pod's containers
  test/e2e/windows/gmsa_kubelet.go:45
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:08:49.746
Jan 24 20:08:49.746: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-kubelet-test-windows 01/24/23 20:08:49.747
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:08:50.06
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:08:50.263
[It] passes the credential specs down to the Pod's containers
test/e2e/windows/gmsa_kubelet.go:45
STEP: creating a pod with correct GMSA specs 01/24/23 20:08:50.466
Jan 24 20:08:50.581: INFO: Waiting up to 5m0s for pod "with-correct-gmsa-specs" in namespace "gmsa-kubelet-test-windows-6013" to be "running and ready"
("Pending" polls over ~8s elided)
Jan 24 20:08:58.790: INFO: Pod "with-correct-gmsa-specs": Phase="Running", Reason="", readiness=true. Elapsed: 8.208760951s
Jan 24 20:08:58.790: INFO: Pod "with-correct-gmsa-specs" satisfied condition "running and ready"
STEP: checking the domain reported by nltest in the containers 01/24/23 20:08:58.894
Jan 24 20:08:58.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-6013 exec --namespace=gmsa-kubelet-test-windows-6013 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Jan 24 20:09:00.058: INFO: stderr: ""
Jan 24 20:09:00.058: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Jan 24 20:09:00.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-6013 exec --namespace=gmsa-kubelet-test-windows-6013 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Jan 24 20:09:01.205: INFO: stderr: ""
Jan 24 20:09:01.205: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/framework.go:187
Jan 24 20:09:01.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-kubelet-test-windows-6013" for this suite. 01/24/23 20:09:01.312
{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","completed":16,"skipped":1018,"failed":0}
------------------------------
• [11.672 seconds]
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/windows/framework.go:27
  kubelet GMSA support
  test/e2e/windows/gmsa_kubelet.go:43
  when creating a pod with correct GMSA credential specs
  test/e2e/windows/gmsa_kubelet.go:44
  passes the credential specs down to the Pod's containers
  test/e2e/windows/gmsa_kubelet.go:45
------------------------------
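"Passes the credential specs down" means the kubelet hands each container the GMSA credential spec carried in its securityContext.windowsOptions, which is why nltest reports a different parent domain (acme.com vs contoso.org) per container of the same pod. A sketch of such a pod in Go; the inline JSON is a truncated, hypothetical credential spec, not a working one, and the image tag is a stand-in.

// gmsa_sketch.go: per-container GMSA credential specs on one Windows pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func gmsaPod() *corev1.Pod {
	// Truncated example specs; real ones carry the full DomainJoinConfig.
	acme := `{"CmsPlugins":["ActiveDirectory"],"DomainJoinConfig":{"DnsName":"acme.com"}}`
	contoso := `{"CmsPlugins":["ActiveDirectory"],"DomainJoinConfig":{"DnsName":"contoso.org"}}`
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-correct-gmsa-specs"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{
				{
					Name: "container1", Image: "mcr.microsoft.com/windows/servercore:ltsc2022",
					SecurityContext: &corev1.SecurityContext{
						WindowsOptions: &corev1.WindowsSecurityContextOptions{
							GMSACredentialSpec: &acme, // container1 reports acme.com
						},
					},
				},
				{
					Name: "container2", Image: "mcr.microsoft.com/windows/servercore:ltsc2022",
					SecurityContext: &corev1.SecurityContext{
						WindowsOptions: &corev1.WindowsSecurityContextOptions{
							GMSACredentialSpec: &contoso, // container2 reports contoso.org
						},
					},
				},
			},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(gmsaPod(), "", "  ")
	fmt.Println(string(b))
}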
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
test/e2e/scheduling/predicates.go:461
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:09:01.422
Jan 24 20:09:01.422: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 01/24/23 20:09:01.423
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:01.735
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:09:01.937
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:92
Jan 24 20:09:02.141: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 20:09:02.362: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 20:09:02.466: INFO: Logging pods the apiserver thinks are on node capz-conf-jzg2c before test
Jan 24 20:09:02.579: INFO: calico-node-windows-77tct from calico-system started at 2023-01-24 19:20:29 +0000 UTC (containers calico-node-felix ready, restart count 1; calico-node-startup ready, restart count 0)
Jan 24 20:09:02.579: INFO: with-correct-gmsa-specs from gmsa-kubelet-test-windows-6013 started at 2023-01-24 20:08:50 +0000 UTC (containers container1 and container2 ready, restart count 0)
Jan 24 20:09:02.579: INFO: containerd-logger-xt7tr from kube-system started at 2023-01-24 19:20:29 +0000 UTC (container containerd-logger ready, restart count 0)
Jan 24 20:09:02.579: INFO: csi-azuredisk-node-win-l79cl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (containers azuredisk, liveness-probe and node-driver-registrar ready, restart count 0)
Jan 24 20:09:02.579: INFO: csi-proxy-xnqhl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (container csi-proxy ready, restart count 0)
Jan 24 20:09:02.579: INFO: kube-proxy-windows-6szqk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (container kube-proxy ready, restart count 0)
Jan 24 20:09:02.579: INFO: Logging pods the apiserver thinks are on node capz-conf-s4kcn before test
Jan 24 20:09:02.690: INFO: calico-node-windows-t9nl5 from calico-system started at 2023-01-24 19:20:29 +0000 UTC (containers calico-node-felix ready, restart count 1; calico-node-startup ready, restart count 0)
Jan 24 20:09:02.690: INFO: containerd-logger-6ndvk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (container containerd-logger ready, restart count 0)
Jan 24 20:09:02.690: INFO: csi-azuredisk-node-win-8mbvt from kube-system started at 2023-01-24 19:20:59 +0000 UTC (containers azuredisk, liveness-probe and node-driver-registrar ready, restart count 0)
Jan 24 20:09:02.690: INFO: csi-proxy-t452z from kube-system started at 2023-01-24 19:20:59 +0000 UTC (container csi-proxy ready, restart count 0)
Jan 24 20:09:02.690: INFO: kube-proxy-windows-9qxpl from kube-system started at 2023-01-24 19:20:29 +0000 UTC (container kube-proxy ready, restart count 0)
[It] validates that NodeSelector is respected if matching [Conformance]
test/e2e/scheduling/predicates.go:461
STEP: Trying to launch a pod without a label to get a node which can launch it. 01/24/23 20:09:02.69
Jan 24 20:09:02.801: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3100" to be "running"
("Pending" polls elided)
Jan 24 20:09:09.007: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/24/23 20:09:09.109
STEP: Trying to apply a random label on the found node. 01/24/23 20:09:09.222
STEP: verifying the node has the label kubernetes.io/e2e-edd53547-47a3-4478-9a11-7c1955065ed4 42 01/24/23 20:09:09.334
STEP: Trying to relaunch the pod, now with labels. 01/24/23 20:09:09.437
Jan 24 20:09:09.546: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3100" to be "not pending"
("Pending" polls elided)
Jan 24 20:09:15.753: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-edd53547-47a3-4478-9a11-7c1955065ed4 off the node capz-conf-s4kcn 01/24/23 20:09:15.856
STEP: verifying the node doesn't have the label kubernetes.io/e2e-edd53547-47a3-4478-9a11-7c1955065ed4 01/24/23 20:09:16.07
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:187
Jan 24 20:09:16.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3100" for this suite. 01/24/23 20:09:16.281
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:83
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","completed":17,"skipped":1054,"failed":0}
------------------------------
• [14.966 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:461
------------------------------
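The predicate under test is the simplest scheduling constraint: a pod with spec.nodeSelector can only land on a node whose labels match, so the test labels one node, relaunches the pod with that selector, and expects it to leave Pending. A client-go sketch of the same two steps; it is not the test's code, the label key is hypothetical, and the node name is taken from the log.

// nodeselector_sketch.go: label a node, then pin a pod to it.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Apply the label with a strategic-merge patch.
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-demo":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "capz-conf-s4kcn",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// The scheduler may only place this pod on nodes carrying that label.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: "default"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name: "pause", Image: "registry.k8s.io/pause:3.8",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}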
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:250
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:09:16.396
Jan 24 20:09:16.396: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces 01/24/23 20:09:16.397
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:16.709
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:09:16.914
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:250
STEP: Creating a test namespace 01/24/23 20:09:17.117
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:17.43
STEP: Creating a service in the namespace 01/24/23 20:09:17.632
STEP: Deleting the namespace 01/24/23 20:09:17.75
STEP: Waiting for the namespace to be removed. 01/24/23 20:09:17.857
STEP: Recreating the namespace 01/24/23 20:09:23.959
STEP: Verifying there is no service in the namespace 01/24/23 20:09:24.269
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:187
Jan 24 20:09:24.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8352" for this suite. 01/24/23 20:09:24.478
STEP: Destroying namespace "nsdeletetest-8642" for this suite. 01/24/23 20:09:24.585
Jan 24 20:09:24.687: INFO: Namespace nsdeletetest-8642 was already deleted
STEP: Destroying namespace "nsdeletetest-5495" for this suite. 01/24/23 20:09:24.687
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","completed":18,"skipped":1189,"failed":0}
------------------------------
• [8.398 seconds]
[sig-api-machinery] Namespaces [Serial]
  test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:250
Begin Captured GinkgoWriter Output >> (verbatim duplicate of the streamed spec output above; elided) << End Captured GinkgoWriter Output
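The assertion this spec makes is that namespace deletion is transitive: a Service created in a namespace disappears with it, and a recreated namespace of the same name starts empty. A minimal client-go sketch of that sequence (hypothetical object names; not the conformance test's actual code):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Create a throwaway namespace and a Service inside it.
	ns, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().Services(ns.Name).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "test"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Deleting the namespace garbage-collects everything in it, the Service
	// included; once it is gone, a recreated namespace of the same name must
	// list zero Services -- that is the assertion the test makes.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}

------------------------------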
SSSS… (run of skipped-spec markers elided)
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory
  should be equal to a calculated allocatable memory value
  test/e2e/windows/memory_limits.go:54
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:09:24.805
Jan 24 20:09:24.805: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows 01/24/23 20:09:24.806
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:25.118
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:09:25.322
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/memory_limits.go:48
[It] should be equal to a calculated allocatable memory value
  test/e2e/windows/memory_limits.go:54
STEP: Getting memory details from node status and kubelet config 01/24/23 20:09:25.63
Jan 24 20:09:25.630: INFO: Getting configuration details for node capz-conf-jzg2c
Jan 24 20:09:25.747: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}}
STEP: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} 01/24/23 20:09:25.747
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:187
Jan 24 20:09:25.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-1062" for this suite. 01/24/23 20:09:25.854
{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value","completed":19,"skipped":1334,"failed":0}
------------------------------
• [1.161 seconds]
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/framework.go:27
  Allocatable node memory
  test/e2e/windows/memory_limits.go:53
  should be equal to a calculated allocatable memory value
  test/e2e/windows/memory_limits.go:54
Begin Captured GinkgoWriter Output >> (verbatim duplicate of the streamed spec output above; elided) << End Captured GinkgoWriter Output
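The calculated value here follows the node-allocatable formula: allocatable = capacity − system-reserved − kube-reserved − hard-eviction threshold. With the numbers in the nodeMem line (16776756Ki capacity, zero reservations, 100Mi hard eviction), that works out to 17179398144 − 104857600 = 17074540544 bytes, i.e. exactly the 16674356Ki the node reports. A small sketch redoing the arithmetic with k8s.io/apimachinery's resource.Quantity:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values copied from the nodeMem line for node capz-conf-jzg2c.
	capacity := resource.MustParse("16776756Ki") // 17179398144 bytes
	systemReserved := resource.MustParse("0")
	kubeReserved := resource.MustParse("0")
	hardEviction := resource.MustParse("100Mi") // 104857600 bytes

	// allocatable = capacity - systemReserved - kubeReserved - hardEviction
	allocatable := capacity.DeepCopy()
	allocatable.Sub(systemReserved)
	allocatable.Sub(kubeReserved)
	allocatable.Sub(hardEviction)

	// Prints 17074540544 (= 16674356Ki), matching the node's stated allocatable.
	fmt.Println(allocatable.Value())
}

------------------------------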
SSSS… (run of skipped-spec markers elided)
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits
  should fail deployments of pods once there isn't enough memory
  test/e2e/windows/memory_limits.go:60
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:09:25.968
Jan 24 20:09:25.968: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows 01/24/23 20:09:25.969
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:26.28
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:09:26.483
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/memory_limits.go:48
[It] should fail deployments of pods once there isn't enough memory
  test/e2e/windows/memory_limits.go:60
Jan 24 20:09:27.225: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:187
Jan 24 20:09:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-8172" for this suite. 01/24/23 20:09:27.331
{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory","completed":20,"skipped":1357,"failed":0}
------------------------------
• [1.471 seconds]
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/framework.go:27
  attempt to deploy past allocatable memory limits
  test/e2e/windows/memory_limits.go:59
  should fail deployments of pods once there isn't enough memory
  test/e2e/windows/memory_limits.go:60
Begin Captured GinkgoWriter Output >> (verbatim duplicate of the streamed spec output above; elided) << End Captured GinkgoWriter Output
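The spec passes as soon as it observes a FailedScheduling event whose message shows memory pressure: "2 Insufficient memory" comes from the two Windows nodes, while the control-plane node is excluded by its untolerated taint. A sketch of fetching such events with client-go (namespace name taken from the log; reason is a supported event field selector):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List scheduling failures in the test's namespace (name from the log).
	events, err := cs.CoreV1().Events("memory-limit-test-windows-8172").List(
		context.TODO(), metav1.ListOptions{FieldSelector: "reason=FailedScheduling"})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// e.g. "0/3 nodes are available: ... 2 Insufficient memory. ..."
		fmt.Println(e.InvolvedObject.Name, "-", e.Message)
	}
}

------------------------------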
SSSS… (run of skipped-spec markers elided)
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment
  Should scale from 5 pods to 3 pods and from 3 to 1
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:09:27.443
Jan 24 20:09:27.443: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 20:09:27.444
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:09:27.757
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:09:27.96
[It] Should scale from 5 pods to 3 pods and from 3 to 1
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 5 replicas 01/24/23 20:09:28.163
STEP: creating deployment test-deployment in namespace horizontal-pod-autoscaling-531 01/24/23 20:09:28.279
I0124 20:09:28.393382 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-531, replica count: 5
I0124 20:09:38.544857 14 runners.go:193] test-deployment Pods: 5 out of 5 created, 0 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0124 20:09:48.547572 14 runners.go:193] test-deployment Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 20:09:48.547
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-531 01/24/23 20:09:48.669
I0124 20:09:48.778050 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-531, replica count: 1
I0124 20:09:58.929339 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 20:10:03.932: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Jan 24 20:10:04.035: INFO: RC test-deployment: consume 325 millicores in total
Jan 24 20:10:04.035: INFO: RC test-deployment: setting consumption to 325 millicores in total
Jan 24 20:10:04.035: INFO: RC test-deployment: consume 0 MB in total
Jan 24 20:10:04.035: INFO: RC test-deployment: consume custom metric 0 in total
Jan 24 20:10:04.035: INFO: RC test-deployment: disabling consumption of custom metric QPS
Jan 24 20:10:04.035: INFO: RC test-deployment: disabling mem consumption
Jan 24 20:10:04.250: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:10:24.353: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:10:34.035: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:10:34.035: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:10:44.353: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:11:04.184: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:11:04.185: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:11:04.354: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:11:24.359: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:11:34.298: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:11:34.298: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:11:44.354: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:12:04.353: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:12:04.414: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:12:04.415: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:12:24.355: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:12:34.529: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:12:34.529: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:12:44.354: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:13:04.357: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:13:04.643: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:13:04.644: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:13:24.355: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:13:34.754: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:13:34.754: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:13:44.356: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:14:04.353: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:14:04.866: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:14:04.867: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:14:24.354: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:14:34.979: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:14:34.979: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:14:44.356: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:15:04.354: INFO: waiting for 3 replicas (current: 5)
Jan 24 20:15:05.091: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 24 20:15:05.091: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 }
Jan 24 20:15:24.356: INFO: waiting for 3 replicas (current: 3)
Jan 24 20:15:24.356: INFO: RC test-deployment: consume 10 millicores in total
Jan 24 20:15:24.356: INFO: RC test-deployment: setting consumption to 10 millicores in total
Jan 24 20:15:24.458: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:15:35.204: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:15:35.204: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:15:44.563: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:16:04.563: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:16:05.325: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:16:05.325: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:16:24.564: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:16:35.436: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:16:35.436: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:16:44.565: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:17:04.562: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:17:05.548: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:17:05.548: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:17:24.564: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:17:35.660: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:17:35.660: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:17:44.564: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:18:04.563: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:18:05.773: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:18:05.773: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:18:24.563: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:18:35.884: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:18:35.884: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:18:44.565: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:19:04.563: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:19:05.998: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:19:05.998: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:19:24.565: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:19:36.117: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:19:36.117: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:19:44.562: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:20:04.565: INFO: waiting for 1 replicas (current: 3)
Jan 24 20:20:06.229: INFO: RC test-deployment: sending request to consume 10 millicores
Jan 24 20:20:06.229: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-531/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 }
Jan 24 20:20:24.562: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC test-deployment 01/24/23 20:20:24.671
Jan 24 20:20:24.671: INFO: RC test-deployment: stopping metric consumer
Jan 24 20:20:24.671: INFO: RC test-deployment: stopping CPU consumer
Jan 24 20:20:24.671: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-531, will wait for the garbage collector to delete the pods 01/24/23 20:20:34.672
Jan 24 20:20:35.036: INFO: Deleting Deployment.apps test-deployment took: 108.937295ms
Jan 24 20:20:35.137: INFO: Terminating Deployment.apps test-deployment pods took: 100.523413ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-531, will wait for the garbage collector to delete the pods 01/24/23 20:20:37.27
Jan 24 20:20:37.631: INFO: Deleting ReplicationController test-deployment-ctrl took: 107.333058ms
Jan 24 20:20:37.731: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.30236ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:187
Jan 24 20:20:40.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-531" for this suite. 01/24/23 20:20:40.179
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","completed":21,"skipped":1415,"failed":0}
------------------------------
• [SLOW TEST] [672.843 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
  Should scale from 5 pods to 3 pods and from 3 to 1
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
Begin Captured GinkgoWriter Output >> (verbatim duplicate of the streamed spec output above; elided) << End Captured GinkgoWriter Output
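The "ConsumeCPU URL: {https … }" lines above are Go's default struct formatting of a net/url.URL: the resource consumer is driven through the apiserver's service proxy subresource, with the requested load encoded in the query string. A sketch reconstructing one of those URLs (values copied from the log):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Values copied from one ConsumeCPU log line above.
	u := url.URL{
		Scheme: "https",
		Host:   "capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443",
		Path: "/api/v1/namespaces/horizontal-pod-autoscaling-531" +
			"/services/test-deployment-ctrl/proxy/ConsumeCPU",
		RawQuery: "durationSec=30&millicores=325&requestSizeMillicores=100",
	}
	// The braces-and-spaces form in the log is this struct printed with %v;
	// String() yields the request form actually sent to the apiserver.
	fmt.Println(u.String())
}

Each request asks the consumer to burn the given millicores for durationSec=30, which is why the driver re-sends it roughly every 30 seconds while polling for the HPA to converge.

------------------------------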
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled
  shouldn't scale up
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:20:40.287
Jan 24 20:20:40.287: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 20:20:40.289
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:20:40.6
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:20:40.803
[It] shouldn't scale up
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
STEP: setting up resource consumer and HPA 01/24/23 20:20:41.006
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 01/24/23 20:20:41.007
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-2337 01/24/23 20:20:41.129
I0124 20:20:41.236112 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-2337, replica count: 1
I0124 20:20:51.387961 14 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 20:20:51.388
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-2337 01/24/23 20:20:51.503
I0124 20:20:51.616587 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-2337, replica count: 1
I0124 20:21:01.769197 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 20:21:06.769: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 24 20:21:06.872: INFO: RC consumer: consume 110 millicores in total
Jan 24 20:21:06.872: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 24 20:21:06.872: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 20:21:06.872: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 20:21:06.872: INFO: RC consumer: consume 0 MB in total
Jan 24 20:21:06.872: INFO: RC consumer: disabling mem consumption
Jan 24 20:21:06.872: INFO: RC consumer: consume custom metric 0 in total
Jan 24 20:21:06.872: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: trying to trigger scale up 01/24/23 20:21:06.982
Jan 24 20:21:06.983: INFO: RC consumer: consume 880 millicores in total
Jan 24 20:21:07.077: INFO: RC consumer: setting consumption to 880 millicores in total
Jan 24 20:21:07.182: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:07.284: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 20:21:17.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:17.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 20:21:27.392: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:27.495: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aec4c0}
Jan 24 20:21:37.078: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:21:37.078: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:21:37.389: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:37.492: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dba0e0}
Jan 24 20:21:47.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:47.492: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aecb80}
Jan 24 20:21:57.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:21:57.490: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aece30}
Jan 24 20:22:07.216: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:22:07.216: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:22:07.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:07.490: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00292c5d0}
Jan 24 20:22:17.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:17.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aed250}
Jan 24 20:22:27.391: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:27.493: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aed4f0}
Jan 24 20:22:37.395: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:37.498: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aed5d0}
Jan 24 20:22:37.806: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:22:37.806: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:22:47.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:47.499: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aed880}
Jan 24 20:22:57.389: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:22:57.492: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dba870}
Jan 24 20:23:07.387: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:07.490: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dbab40}
Jan 24 20:23:08.651: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:23:08.651: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:23:17.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:17.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dba0d0}
Jan 24 20:23:27.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:27.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aec2d0}
Jan 24 20:23:37.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:37.493: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002aec5c0}
Jan 24 20:23:39.535: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:23:39.535: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:23:47.389: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:47.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00292c240}
Jan 24 20:23:57.395: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:23:57.498: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00292cc40}
Jan 24 20:24:07.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:24:07.491: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dba5b0}
Jan 24 20:24:09.821: INFO: RC consumer: sending request to consume 880 millicores
Jan 24 20:24:09.821: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2337/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Jan 24 20:24:17.387: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:24:17.490: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002dba860}
Jan 24 20:24:27.388: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:24:27.490: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc00292dc40}
Jan 24 20:24:37.386: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:24:37.489: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002d5a050}
Jan 24 20:24:37.591: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 20:24:37.699: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc002d5a510}
Jan 24 20:24:37.699: INFO: Number of replicas was stable over 3m30s
STEP: verifying time waited for a scale up 01/24/23 20:24:37.699
Jan 24 20:24:37.700: INFO: time waited for scale up: 3m30.622335873s
STEP: verifying number of replicas 01/24/23 20:24:37.7
STEP: Removing consuming RC consumer 01/24/23 20:24:37.916
Jan 24 20:24:37.916: INFO: RC consumer: stopping metric consumer
Jan 24 20:24:37.916: INFO: RC consumer: stopping CPU consumer
Jan 24 20:24:37.916: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-2337, will wait for the garbage collector to delete the pods 01/24/23 20:24:47.918
Jan 24 20:24:48.281: INFO: Deleting Deployment.apps consumer took: 109.633507ms
Jan 24 20:24:48.381: INFO: Terminating Deployment.apps consumer pods took: 100.2128ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-2337, will wait for the garbage collector to delete the pods 01/24/23 20:24:50.405
Jan 24 20:24:50.767: INFO: Deleting ReplicationController consumer-ctrl took: 106.568358ms
Jan 24 20:24:50.868: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.574156ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:187
Jan 24 20:24:52.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-2337" for this suite. 01/24/23 20:24:53.008
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up","completed":22,"skipped":1415,"failed":0}
------------------------------
• [SLOW TEST] [252.827 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23
with autoscaling disabled test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:136
shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
------------------------------
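Both "autoscaling disabled" specs above exercise the autoscaling/v2 behavior field: the suite drives CPU far past (or below) the target, then polls the HPA (the repeated "expecting there to be in [1, 1] replicas" lines) until the replica count has held for the full stability window. The log does not show the HPA object itself; below is a minimal client-go sketch of one plausible shape, assuming the scale-up direction is pinned with selectPolicy: Disabled. All names, bounds, and the 20% CPU target are illustrative assumptions, not taken from the e2e source.

// Hypothetical sketch, not the e2e suite's actual code: an autoscaling/v2
// HPA whose scale-up rules are disabled via selectPolicy: Disabled.
package e2esketch

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func disabledScaleUpHPA(namespace string) *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20) // illustrative target, not from the log
	disabled := autoscalingv2.DisabledPolicySelect

	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer", Namespace: namespace},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "consumer",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
			// With scale-up disabled, the 880-millicore load above may push
			// utilization far past the target, yet replicas must stay put.
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{SelectPolicy: &disabled},
			},
		},
	}
}

Under a spec like this the controller keeps reporting CurrentReplicas:1 DesiredReplicas:1, exactly as in the poll lines above, which is what the "shouldn't scale up" assertion checks.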
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:172 [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/24/23 20:24:53.134 Jan 24 20:24:53.135: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 20:24:53.136 STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:24:53.447 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:24:53.65 [It] shouldn't scale down test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:172 STEP: setting up resource consumer and HPA 01/24/23 20:24:53.853 STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 3 replicas 01/24/23 20:24:53.853 STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-1621 01/24/23 20:24:53.973 I0124 20:24:54.079434 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-1621, replica count: 3 I0124 20:25:04.230627 14 runners.go:193] consumer Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady STEP: Running controller 01/24/23 20:25:04.23 STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-1621 01/24/23 20:25:04.358 I0124 20:25:04.471115 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-1621, replica count: 1 I0124 20:25:14.622993 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 20:25:19.623: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 24 20:25:19.726: INFO: RC consumer: consume 330 millicores in total Jan 24 20:25:19.726: INFO: RC consumer: setting consumption to 330 millicores in total Jan 24 20:25:19.726: INFO: RC consumer: sending request to consume 330 millicores Jan 24 20:25:19.726: INFO: RC consumer: consume 0 MB in total Jan 24 20:25:19.726: INFO: RC consumer: consume custom metric 0 in total Jan 24 20:25:19.726: INFO: RC consumer: disabling consumption of custom metric QPS Jan 24 20:25:19.726: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 24 20:25:19.726: INFO: RC consumer: disabling mem consumption STEP: trying to trigger scale down 01/24/23 20:25:19.842 Jan 24 20:25:19.842: INFO: RC consumer: consume 110 millicores in total Jan 24 20:25:19.927: INFO: RC consumer: setting consumption to 110 millicores in total Jan 24 20:25:20.030: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:20.148: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 24 20:25:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:30.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 24 20:25:40.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:40.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c5c0} Jan 24 20:25:49.928: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:25:49.928: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:25:50.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:50.355: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670aa0} Jan 24 20:26:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670b80} Jan 24 20:26:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:10.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c930} Jan 24 20:26:20.047: INFO: RC consumer: sending request to
consume 110 millicores Jan 24 20:26:20.047: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:26:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:20.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e6c0} Jan 24 20:26:30.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:30.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7a0} Jan 24 20:26:40.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:40.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365eaa0} Jan 24 20:26:50.166: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:26:50.166: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:26:50.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:50.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399d190} Jan 24 20:27:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ec80} Jan 24 20:27:10.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:10.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ef30} Jan 24 20:27:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:20.277: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:27:20.277: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:27:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026703d0} Jan 24 20:27:30.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:30.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670550} Jan 24 20:27:40.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:40.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c350} Jan 24 20:27:50.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:50.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e2e0} Jan 24 20:27:50.387: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:27:50.387: INFO: 
ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:28:00.257: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:00.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7b0} Jan 24 20:28:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:10.361: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c680} Jan 24 20:28:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670a70} Jan 24 20:28:20.498: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:28:20.498: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:28:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:30.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c9c0} Jan 24 20:28:40.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:40.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ecc0} Jan 24 20:28:50.260: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:50.362: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ef70} Jan 24 20:28:50.609: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:28:50.609: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:29:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399ccf0} Jan 24 20:29:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:10.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670ea0} Jan 24 20:29:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e2f0} Jan 24 20:29:20.722: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:29:20.722: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:29:30.255: INFO: expecting there to 
be in [3, 3] replicas (are: 3) Jan 24 20:29:30.360: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c0c0} Jan 24 20:29:40.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:40.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c3b0} Jan 24 20:29:50.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:50.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c4a0} Jan 24 20:29:50.835: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:29:50.835: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:30:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:00.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026706d0} Jan 24 20:30:10.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:10.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7c0} Jan 24 20:30:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:20.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026709d0} Jan 24 20:30:20.951: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:30:20.951: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:30:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:30.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ead0} Jan 24 20:30:40.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:40.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c900} Jan 24 20:30:50.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:50.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399cba0} Jan 24 20:30:51.064: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:30:51.064: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:31:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399ce40} Jan 24 20:31:10.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 
20:31:10.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399d120} Jan 24 20:31:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:20.355: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026700a0} Jan 24 20:31:21.175: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:31:21.175: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:31:30.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:30.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e510} Jan 24 20:31:40.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:40.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670550} Jan 24 20:31:50.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:50.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002998650} Jan 24 20:31:51.286: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:31:51.286: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:32:00.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:00.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e830} Jan 24 20:32:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:10.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365eb40} Jan 24 20:32:20.264: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:20.367: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c540} Jan 24 20:32:21.399: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:32:21.399: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:32:30.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:30.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002998a20} Jan 24 20:32:40.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:40.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999060} Jan 24 20:32:50.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:50.354: INFO: HPA status: 
{ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999750} Jan 24 20:32:50.456: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:50.559: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999cf0} Jan 24 20:32:50.559: INFO: Number of replicas was stable over 7m30s STEP: verifying time waited for a scale down 01/24/23 20:32:50.559 Jan 24 20:32:50.559: INFO: time waited for scale down: 7m30.631131856s STEP: verifying number of replicas 01/24/23 20:32:50.559 STEP: Removing consuming RC consumer 01/24/23 20:32:50.779 Jan 24 20:32:50.779: INFO: RC consumer: stopping metric consumer Jan 24 20:32:50.779: INFO: RC consumer: stopping CPU consumer Jan 24 20:32:50.779: INFO: RC consumer: stopping mem consumer STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-1621, will wait for the garbage collector to delete the pods 01/24/23 20:33:00.783 Jan 24 20:33:01.145: INFO: Deleting Deployment.apps consumer took: 108.514707ms Jan 24 20:33:01.246: INFO: Terminating Deployment.apps consumer pods took: 100.952721ms STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-1621, will wait for the garbage collector to delete the pods 01/24/23 20:33:03.788 Jan 24 20:33:04.159: INFO: Deleting ReplicationController consumer-ctrl took: 110.826479ms Jan 24 20:33:04.260: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.654113ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 Jan 24 20:33:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-1621" for this suite. 01/24/23 20:33:06.928
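Each "ConsumeCPU URL" entry above is a structured dump of a url.URL aimed at the API server's service proxy, with the query string durationSec=30&millicores=...&requestSizeMillicores=100. Below is a rough reconstruction of one such request; the POST verb and bearer-token auth are assumptions, since only the URL shape appears in the log.

// Hypothetical sketch: reassembling one "ConsumeCPU URL" log entry into an
// HTTP request against the API server's service proxy. Only the URL shape
// comes from the log; verb and auth are assumptions.
package e2esketch

import (
	"fmt"
	"net/http"
	"net/url"
)

func consumeCPU(client *http.Client, host, ns, token string, millicores int) error {
	q := url.Values{}
	q.Set("durationSec", "30")
	q.Set("millicores", fmt.Sprint(millicores))
	q.Set("requestSizeMillicores", "100") // load is fanned out in 100m chunks

	u := url.URL{
		Scheme:   "https",
		Host:     host, // e.g. capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443
		Path:     fmt.Sprintf("/api/v1/namespaces/%s/services/consumer-ctrl/proxy/ConsumeCPU", ns),
		RawQuery: q.Encode(),
	}

	req, err := http.NewRequest(http.MethodPost, u.String(), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("ConsumeCPU returned %s", resp.Status)
	}
	return nil
}

The roughly 30-second cadence between consecutive "sending request" entries matches durationSec=30: consumption lapses unless it is re-driven every interval, which is why the suite keeps resending for the whole 7m30s stability window.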
�[38;5;243m01/24/23 20:33:06.928�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down","completed":23,"skipped":1769,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [493.901 seconds]�[0m [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m with autoscaling disabled �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:136�[0m shouldn't scale down �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:172�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 20:24:53.134�[0m Jan 24 20:24:53.135: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/24/23 20:24:53.136�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 20:24:53.447�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 20:24:53.65�[0m [It] shouldn't scale down test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:172 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m01/24/23 20:24:53.853�[0m �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 3 replicas �[38;5;243m01/24/23 20:24:53.853�[0m �[1mSTEP:�[0m creating deployment consumer in namespace horizontal-pod-autoscaling-1621 �[38;5;243m01/24/23 20:24:53.973�[0m I0124 20:24:54.079434 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-1621, replica count: 3 I0124 20:25:04.230627 14 runners.go:193] consumer Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/24/23 20:25:04.23�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-1621 �[38;5;243m01/24/23 20:25:04.358�[0m I0124 20:25:04.471115 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-1621, replica count: 1 I0124 20:25:14.622993 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 20:25:19.623: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 24 20:25:19.726: INFO: RC consumer: consume 330 millicores in total Jan 24 20:25:19.726: INFO: RC consumer: setting consumption to 330 millicores in total Jan 24 20:25:19.726: INFO: RC consumer: sending request to consume 330 millicores Jan 24 20:25:19.726: INFO: RC consumer: consume 0 MB in total Jan 24 20:25:19.726: INFO: RC consumer: consume custom metric 0 in total Jan 24 20:25:19.726: INFO: RC consumer: disabling consumption of custom metric QPS Jan 24 20:25:19.726: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 24 20:25:19.726: INFO: RC consumer: disabling mem consumption 
�[1mSTEP:�[0m trying to trigger scale down �[38;5;243m01/24/23 20:25:19.842�[0m Jan 24 20:25:19.842: INFO: RC consumer: consume 110 millicores in total Jan 24 20:25:19.927: INFO: RC consumer: setting consumption to 110 millicores in total Jan 24 20:25:20.030: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:20.148: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 24 20:25:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:30.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 24 20:25:40.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:40.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c5c0} Jan 24 20:25:49.928: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:25:49.928: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:25:50.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:25:50.355: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670aa0} Jan 24 20:26:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670b80} Jan 24 20:26:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:10.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c930} Jan 24 20:26:20.047: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:26:20.047: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:26:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:20.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e6c0} Jan 24 20:26:30.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:30.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7a0} Jan 24 20:26:40.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:26:40.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365eaa0} Jan 24 20:26:50.166: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:26:50.166: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:26:50.251: INFO: expecting there to be in [3, 3] replicas 
(are: 3) Jan 24 20:26:50.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399d190} Jan 24 20:27:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ec80} Jan 24 20:27:10.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:10.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ef30} Jan 24 20:27:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:20.277: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:27:20.277: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:27:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026703d0} Jan 24 20:27:30.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:30.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670550} Jan 24 20:27:40.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:40.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c350} Jan 24 20:27:50.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:27:50.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e2e0} Jan 24 20:27:50.387: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:27:50.387: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:28:00.257: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:00.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7b0} Jan 24 20:28:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:10.361: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c680} Jan 24 20:28:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670a70} Jan 24 20:28:20.498: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:28:20.498: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:28:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:30.357: INFO: HPA status: 
{ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c9c0} Jan 24 20:28:40.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:40.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ecc0} Jan 24 20:28:50.260: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:28:50.362: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ef70} Jan 24 20:28:50.609: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:28:50.609: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:29:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399ccf0} Jan 24 20:29:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:10.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670ea0} Jan 24 20:29:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:20.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e2f0} Jan 24 20:29:20.722: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:29:20.722: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:29:30.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:30.360: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c0c0} Jan 24 20:29:40.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:40.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c3b0} Jan 24 20:29:50.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:29:50.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c4a0} Jan 24 20:29:50.835: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:29:50.835: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:30:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:00.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026706d0} Jan 24 20:30:10.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:10.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e7c0} Jan 24 20:30:20.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:20.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026709d0} Jan 24 20:30:20.951: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:30:20.951: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:30:30.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:30.356: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365ead0} Jan 24 20:30:40.253: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:40.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c900} Jan 24 20:30:50.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:30:50.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399cba0} Jan 24 20:30:51.064: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:30:51.064: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:31:00.254: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:00.357: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399ce40} Jan 24 20:31:10.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:10.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399d120} Jan 24 20:31:20.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:20.355: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026700a0} Jan 24 20:31:21.175: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:31:21.175: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:31:30.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:30.353: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e510} Jan 24 20:31:40.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:40.359: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002670550} Jan 24 20:31:50.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:31:50.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 
CurrentCPUUtilizationPercentage:0xc002998650} Jan 24 20:31:51.286: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:31:51.286: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:32:00.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:00.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365e830} Jan 24 20:32:10.255: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:10.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00365eb40} Jan 24 20:32:20.264: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:20.367: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00399c540} Jan 24 20:32:21.399: INFO: RC consumer: sending request to consume 110 millicores Jan 24 20:32:21.399: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1621/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 24 20:32:30.252: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:30.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002998a20} Jan 24 20:32:40.256: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:40.358: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999060} Jan 24 20:32:50.251: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:50.354: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999750} Jan 24 20:32:50.456: INFO: expecting there to be in [3, 3] replicas (are: 3) Jan 24 20:32:50.559: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002999cf0} Jan 24 20:32:50.559: INFO: Number of replicas was stable over 7m30s
STEP: verifying time waited for a scale down 01/24/23 20:32:50.559
Jan 24 20:32:50.559: INFO: time waited for scale down: 7m30.631131856s
STEP: verifying number of replicas 01/24/23 20:32:50.559
STEP: Removing consuming RC consumer 01/24/23 20:32:50.779
Jan 24 20:32:50.779: INFO: RC consumer: stopping metric consumer Jan 24 20:32:50.779: INFO: RC consumer: stopping CPU consumer Jan 24 20:32:50.779: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-1621, will wait for the garbage collector to delete the pods 01/24/23 20:33:00.783
Jan 24 20:33:01.145: INFO: Deleting Deployment.apps consumer took: 108.514707ms Jan 24 20:33:01.246: INFO: Terminating Deployment.apps consumer pods took: 100.952721ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-1621, will wait for the garbage collector to delete the
pods 01/24/23 20:33:03.788
Jan 24 20:33:04.159: INFO: Deleting ReplicationController consumer-ctrl took: 110.826479ms Jan 24 20:33:04.260: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.654113ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187
Jan 24 20:33:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-1621" for this suite. 01/24/23 20:33:06.928
<< End Captured GinkgoWriter Output
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:33:07.04
Jan 24 20:33:07.040: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces 01/24/23 20:33:07.041
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:33:07.355
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:33:07.558
[It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298
STEP: Read namespace status 01/24/23 20:33:07.761
Jan 24 20:33:07.864: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}
STEP: Patch namespace status 01/24/23 20:33:07.864
Jan 24 20:33:07.972: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"}
STEP: Update namespace status 01/24/23 20:33:07.972
Jan 24 20:33:08.184: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"}
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187
Jan 24 20:33:08.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7723" for this suite.
�[38;5;243m01/24/23 20:33:08.29�[0m {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]","completed":24,"skipped":1798,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [1.368 seconds]�[0m [sig-api-machinery] Namespaces [Serial] �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should apply changes to a namespace status [Conformance] �[38;5;243mtest/e2e/apimachinery/namespace.go:298�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 20:33:07.04�[0m Jan 24 20:33:07.040: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename namespaces �[38;5;243m01/24/23 20:33:07.041�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 20:33:07.355�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 20:33:07.558�[0m [It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 �[1mSTEP:�[0m Read namespace status �[38;5;243m01/24/23 20:33:07.761�[0m Jan 24 20:33:07.864: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} �[1mSTEP:�[0m Patch namespace status �[38;5;243m01/24/23 20:33:07.864�[0m Jan 24 20:33:07.972: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} �[1mSTEP:�[0m Update namespace status �[38;5;243m01/24/23 20:33:07.972�[0m Jan 24 20:33:08.184: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Jan 24 20:33:08.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "namespaces-7723" for this suite. 
------------------------------
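The spec that follows spends most of its runtime in a poll loop that produces the repeated "waiting for N replicas (current: M)" lines. A rough client-go equivalent of that loop, with illustrative names and the ~20s interval inferred from the timestamps (not the e2e framework's actual implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReplicas polls the ReplicationController's status until the desired
// replica count is reached, logging progress like the output below.
func waitForReplicas(client kubernetes.Interface, namespace, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		rc, err := client.CoreV1().ReplicationControllers(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, rc.Status.ReadyReplicas)
		return rc.Status.ReadyReplicas == want, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and RC name taken from the log; the 15m timeout is a guess.
	if err := waitForReplicas(client, "horizontal-pod-autoscaling-8025", "rc", 3, 15*time.Minute); err != nil {
		panic(err)
	}
}
```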
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:33:08.42
Jan 24 20:33:08.420: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 20:33:08.421
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:33:08.74
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:33:08.942
[It] Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 5 replicas 01/24/23 20:33:09.145
STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-8025 01/24/23 20:33:09.266
I0124 20:33:09.374568 14 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-8025, replica count: 5 I0124 20:33:19.525517 14 runners.go:193] rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 20:33:19.525
STEP: creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-8025 01/24/23 20:33:19.662
I0124 20:33:19.770257 14 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-8025, replica count: 1 I0124 20:33:29.924171 14 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 20:33:34.924: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Jan 24 20:33:35.026: INFO: RC rc: consume 325 millicores in total Jan 24 20:33:35.026: INFO: RC rc: setting consumption to 325 millicores in total Jan 24 20:33:35.026: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:33:35.026: INFO: RC rc: consume 0 MB in total Jan 24 20:33:35.026: INFO: RC rc: disabling mem consumption Jan 24 20:33:35.026: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:33:35.026: INFO: RC rc: consume custom metric 0 in total Jan 24 20:33:35.027: INFO: RC rc: disabling consumption of custom metric QPS Jan 24 20:33:35.236: INFO: waiting for 3 replicas (current: 5) Jan 24 20:33:55.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:34:05.175: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:34:05.175: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:34:35.341: INFO: waiting for 3 replicas (current: 5) Jan 24 20:34:55.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:35:05.404: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:35:05.404: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:35:15.353: INFO: waiting for 3 replicas (current: 5) Jan 24 20:35:35.340: INFO: waiting for 3 replicas (current: 5) Jan 24 20:35:35.516: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:35:35.516: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:35:55.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:36:05.628: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:36:05.628: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:36:15.344: INFO: waiting for 3 replicas (current: 5) Jan 24 20:36:35.340: INFO: waiting for 3 replicas (current: 5) Jan 24 20:36:35.741: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:36:35.741: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:36:55.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:37:05.859: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:37:05.859: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:37:15.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:37:35.340: INFO: waiting for 3 replicas (current: 5) Jan 24 20:37:35.971: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:37:35.972: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:37:55.343: INFO: waiting for 3 replicas (current: 5) Jan 24 20:38:06.082: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:38:06.082: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:38:15.355: INFO: waiting for 3 replicas (current: 5) Jan 24 20:38:35.339: INFO: waiting for 3 replicas (current: 5) Jan 24 20:38:36.195: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:38:36.195: INFO: ConsumeCPU URL: {https 
capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:38:55.343: INFO: waiting for 3 replicas (current: 3) Jan 24 20:38:55.447: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:38:55.550: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f864d4} Jan 24 20:39:05.653: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:05.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac8bc} Jan 24 20:39:06.307: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:39:06.308: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:39:15.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:15.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e52c} Jan 24 20:39:25.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:25.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac07c} Jan 24 20:39:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:35.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac134} Jan 24 20:39:36.420: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:39:36.420: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:39:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:45.763: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f86194} Jan 24 20:39:55.653: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:39:55.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac64c} Jan 24 20:40:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e3ac} Jan 24 20:40:06.532: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:40:06.532: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:40:15.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:15.757: 
INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391ec4c} Jan 24 20:40:25.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:25.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac9dc} Jan 24 20:40:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:35.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacbdc} Jan 24 20:40:36.649: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:40:36.649: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:40:45.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:45.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f57c} Jan 24 20:40:55.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:40:55.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f867dc} Jan 24 20:41:05.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:05.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f868ac} Jan 24 20:41:06.764: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:41:06.764: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:41:15.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:15.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f8695c} Jan 24 20:41:25.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:25.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac07c} Jan 24 20:41:35.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:35.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e0b4} Jan 24 20:41:36.877: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:41:36.877: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:41:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:45.762: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 
DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac33c} Jan 24 20:41:55.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:41:55.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac5cc} Jan 24 20:42:05.653: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e3a4} Jan 24 20:42:06.988: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:42:06.989: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:42:15.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:15.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f862e4} Jan 24 20:42:25.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:25.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f8639c} Jan 24 20:42:35.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:35.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391eecc} Jan 24 20:42:37.101: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:42:37.101: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:42:45.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:45.762: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f4a4} Jan 24 20:42:55.665: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:42:55.768: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f8673c} Jan 24 20:43:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f6ac} Jan 24 20:43:07.213: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:43:07.213: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:43:15.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:15.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacfc4} Jan 24 20:43:25.655: INFO: expecting there to be 
in [3, 4] replicas (are: 3) Jan 24 20:43:25.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f868ec} Jan 24 20:43:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:35.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e25c} Jan 24 20:43:37.331: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:43:37.331: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:43:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:45.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac45c} Jan 24 20:43:55.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:55.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e81c} Jan 24 20:44:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391ec24} Jan 24 20:44:07.447: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:44:07.447: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:44:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:15.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac804} Jan 24 20:44:25.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:25.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac9f4} Jan 24 20:44:35.653: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:35.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba08c} Jan 24 20:44:37.559: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:44:37.559: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:44:45.659: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:45.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f8bc} Jan 24 20:44:55.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:55.758: INFO: HPA status: {ObservedGeneration:<nil> 
LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391fa9c} Jan 24 20:45:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacd8c} Jan 24 20:45:07.674: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:45:07.674: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:45:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:15.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacfa4} Jan 24 20:45:25.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:25.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391ff1c} Jan 24 20:45:35.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:35.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba24c} Jan 24 20:45:37.787: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:45:37.787: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:45:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:45.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac45c} Jan 24 20:45:55.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:55.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac514} Jan 24 20:46:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:05.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac77c} Jan 24 20:46:07.899: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:46:07.899: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:46:15.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:15.762: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacbac} Jan 24 20:46:25.659: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:25.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 
CurrentCPUUtilizationPercentage:0xc00391e24c} Jan 24 20:46:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:35.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e80c} Jan 24 20:46:38.011: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:46:38.011: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:46:45.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:45.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacf64} Jan 24 20:46:55.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:55.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f37c} Jan 24 20:47:05.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:05.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad1ec} Jan 24 20:47:08.124: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:47:08.125: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:47:15.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:15.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad60c} Jan 24 20:47:25.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:25.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad72c} Jan 24 20:47:35.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:35.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba1bc} Jan 24 20:47:38.237: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:47:38.237: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:47:45.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:45.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba3bc} Jan 24 20:47:55.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:55.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac4cc} Jan 24 20:48:05.656: INFO: expecting there to be in [3, 4] replicas 
(are: 3) Jan 24 20:48:05.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac73c} Jan 24 20:48:08.351: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:48:08.351: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:48:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:15.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba7fc} Jan 24 20:48:25.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:25.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac99c} Jan 24 20:48:35.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:35.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003caca6c} Jan 24 20:48:38.464: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:48:38.464: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:48:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:45.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbad3c} Jan 24 20:48:55.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:55.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbaf3c} Jan 24 20:48:55.863: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:55.965: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbb184} Jan 24 20:48:55.965: INFO: Number of replicas was stable over 10m0s Jan 24 20:48:55.965: INFO: RC rc: consume 10 millicores in total Jan 24 20:48:55.965: INFO: RC rc: setting consumption to 10 millicores in total Jan 24 20:48:56.068: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:08.579: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:49:08.579: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:49:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:38.691: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:49:38.691: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:49:56.173: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:08.802: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:50:08.802: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:50:16.171: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:38.914: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:50:38.914: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:50:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:09.029: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:51:09.029: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:51:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:39.143: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:51:39.143: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:51:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:09.256: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:52:09.256: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:52:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:39.368: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:52:39.368: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:52:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:09.481: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:53:09.482: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:53:16.175: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:36.174: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:39.594: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:53:39.594: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 
20:53:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:54:09.707: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:54:09.707: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:54:16.172: INFO: waiting for 1 replicas (current: 2) Jan 24 20:54:36.172: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC rc 01/24/23 20:54:36.279
Jan 24 20:54:36.279: INFO: RC rc: stopping metric consumer Jan 24 20:54:36.279: INFO: RC rc: stopping CPU consumer Jan 24 20:54:36.279: INFO: RC rc: stopping mem consumer
STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-8025, will wait for the garbage collector to delete the pods 01/24/23 20:54:46.279
Jan 24 20:54:46.640: INFO: Deleting ReplicationController rc took: 107.002084ms Jan 24 20:54:46.741: INFO: Terminating ReplicationController rc pods took: 100.167171ms
STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-8025, will wait for the garbage collector to delete the pods 01/24/23 20:54:48.869
Jan 24 20:54:49.228: INFO: Deleting ReplicationController rc-ctrl took: 106.262513ms Jan 24 20:54:49.328: INFO: Terminating ReplicationController rc-ctrl pods took: 100.769001ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
Jan 24 20:54:50.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-8025" for this suite.
�[38;5;243m01/24/23 20:54:50.969�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","completed":25,"skipped":2007,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [1302.656 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m [Serial] [Slow] ReplicationController �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:59�[0m Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:64�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 20:33:08.42�[0m Jan 24 20:33:08.420: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m01/24/23 20:33:08.421�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 20:33:08.74�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 20:33:08.942�[0m [It] Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64 �[1mSTEP:�[0m Running consuming RC rc via /v1, Kind=ReplicationController with 5 replicas �[38;5;243m01/24/23 20:33:09.145�[0m �[1mSTEP:�[0m creating replication controller rc in namespace horizontal-pod-autoscaling-8025 �[38;5;243m01/24/23 20:33:09.266�[0m I0124 20:33:09.374568 14 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-8025, replica count: 5 I0124 20:33:19.525517 14 runners.go:193] rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m01/24/23 20:33:19.525�[0m �[1mSTEP:�[0m creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-8025 �[38;5;243m01/24/23 20:33:19.662�[0m I0124 20:33:19.770257 14 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-8025, replica count: 1 I0124 20:33:29.924171 14 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 20:33:34.924: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Jan 24 20:33:35.026: INFO: RC rc: consume 325 millicores in total Jan 24 20:33:35.026: INFO: RC rc: setting consumption to 325 millicores in total Jan 24 20:33:35.026: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:33:35.026: INFO: RC rc: consume 0 MB in total Jan 24 20:33:35.026: INFO: RC rc: disabling mem consumption Jan 24 20:33:35.026: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:33:35.026: INFO: RC rc: consume custom metric 0 in total Jan 24 20:33:35.027: INFO: RC rc: disabling consumption of custom metric QPS Jan 24 20:33:35.236: INFO: waiting for 
rc: sending request to consume 325 millicores Jan 24 20:43:07.213: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:43:15.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:15.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacfc4} Jan 24 20:43:25.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:25.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002f868ec} Jan 24 20:43:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:35.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e25c} Jan 24 20:43:37.331: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:43:37.331: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:43:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:45.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac45c} Jan 24 20:43:55.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:43:55.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e81c} Jan 24 20:44:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391ec24} Jan 24 20:44:07.447: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:44:07.447: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:44:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:15.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac804} Jan 24 20:44:25.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:25.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac9f4} Jan 24 20:44:35.653: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:35.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba08c} Jan 24 20:44:37.559: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:44:37.559: INFO: ConsumeCPU URL: {https 
capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:44:45.659: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:45.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f8bc} Jan 24 20:44:55.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:44:55.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391fa9c} Jan 24 20:45:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:05.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacd8c} Jan 24 20:45:07.674: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:45:07.674: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:45:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:15.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacfa4} Jan 24 20:45:25.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:25.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391ff1c} Jan 24 20:45:35.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:35.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba24c} Jan 24 20:45:37.787: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:45:37.787: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:45:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:45.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac45c} Jan 24 20:45:55.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:45:55.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac514} Jan 24 20:46:05.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:05.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac77c} Jan 24 20:46:07.899: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:46:07.899: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:46:15.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:15.762: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacbac} Jan 24 20:46:25.659: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:25.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e24c} Jan 24 20:46:35.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:35.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391e80c} Jan 24 20:46:38.011: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:46:38.011: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:46:45.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:45.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cacf64} Jan 24 20:46:55.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:46:55.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00391f37c} Jan 24 20:47:05.655: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:05.758: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad1ec} Jan 24 20:47:08.124: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:47:08.125: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:47:15.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:15.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad60c} Jan 24 20:47:25.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:25.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cad72c} Jan 24 20:47:35.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:35.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba1bc} Jan 24 20:47:38.237: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:47:38.237: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:47:45.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:45.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba3bc} Jan 24 20:47:55.654: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:47:55.757: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac4cc} Jan 24 20:48:05.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:05.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac73c} Jan 24 20:48:08.351: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:48:08.351: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:48:15.656: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:15.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dba7fc} Jan 24 20:48:25.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:25.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003cac99c} Jan 24 20:48:35.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:35.761: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003caca6c} Jan 24 20:48:38.464: INFO: RC rc: sending request to consume 325 millicores Jan 24 20:48:38.464: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 24 20:48:45.657: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:45.759: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbad3c} Jan 24 20:48:55.658: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:55.760: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbaf3c} Jan 24 20:48:55.863: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 24 20:48:55.965: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-24 20:38:50 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc002dbb184} Jan 24 20:48:55.965: INFO: Number of replicas was stable over 10m0s Jan 24 20:48:55.965: INFO: RC rc: consume 10 millicores in total Jan 24 20:48:55.965: INFO: RC rc: setting consumption to 10 millicores in total Jan 24 20:48:56.068: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:08.579: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:49:08.579: INFO: ConsumeCPU URL: 
{https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:49:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:49:38.691: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:49:38.691: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:49:56.173: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:08.802: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:50:08.802: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:50:16.171: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:50:38.914: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:50:38.914: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:50:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:09.029: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:51:09.029: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:51:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:51:39.143: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:51:39.143: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:51:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:09.256: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:52:09.256: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:52:16.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:36.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:52:39.368: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:52:39.368: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:52:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:09.481: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:53:09.482: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:53:16.175: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:36.174: INFO: waiting for 1 replicas (current: 3) Jan 24 20:53:39.594: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:53:39.594: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:53:56.172: INFO: waiting for 1 replicas (current: 3) Jan 24 20:54:09.707: INFO: RC rc: sending request to consume 10 millicores Jan 24 20:54:09.707: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8025/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 24 20:54:16.172: INFO: waiting for 1 replicas (current: 2) Jan 24 20:54:36.172: INFO: waiting for 1 replicas (current: 1)
STEP: Removing consuming RC rc 01/24/23 20:54:36.279
Jan 24 20:54:36.279: INFO: RC rc: stopping metric consumer Jan 24 20:54:36.279: INFO: RC rc: stopping CPU consumer Jan 24 20:54:36.279: INFO: RC rc: stopping mem consumer
STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-8025, will wait for the garbage collector to delete the pods 01/24/23 20:54:46.279
Jan 24 20:54:46.640: INFO: Deleting ReplicationController rc took: 107.002084ms Jan 24 20:54:46.741: INFO: Terminating ReplicationController rc pods took: 100.167171ms
STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-8025, will wait for the garbage collector to delete the pods 01/24/23 20:54:48.869
Jan 24 20:54:49.228: INFO: Deleting ReplicationController rc-ctrl took: 106.262513ms Jan 24 20:54:49.328: INFO: Terminating ReplicationController rc-ctrl pods took: 100.769001ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
Jan 24 20:54:50.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-8025" for this suite. 01/24/23 20:54:50.969
<< End Captured GinkgoWriter Output
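Note: the ConsumeCPU traffic in the autoscaling log above is plain HTTP against the API server's service proxy; the consumer re-sends the request roughly every 30 seconds (matching durationSec=30) so load stays constant while the HPA converges, and the test requires the replica count to hold within [3, 4] for 10m0s before dropping load to 10 millicores. A minimal sketch of that request pattern, assuming the endpoint and query parameters exactly as logged (the function name, client wiring, and missing auth handling are illustrative, not the e2e framework's code):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// consumeCPU posts one load request to the rc-ctrl service through the API
// server's service proxy, mirroring the "ConsumeCPU URL" lines in the log.
func consumeCPU(apiServer, namespace string, millicores int) error {
	q := url.Values{}
	q.Set("durationSec", "30") // consumer re-sends before this window expires
	q.Set("millicores", fmt.Sprint(millicores))
	q.Set("requestSizeMillicores", "100") // load is requested in 100m chunks
	u := fmt.Sprintf("https://%s/api/v1/namespaces/%s/services/rc-ctrl/proxy/ConsumeCPU?%s",
		apiServer, namespace, q.Encode())
	// A real client would attach kubeconfig credentials; omitted here.
	resp, err := http.Post(u, "application/x-www-form-urlencoded", nil)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// Host, namespace, and millicore values taken from the log above.
	_ = consumeCPU("capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443",
		"horizontal-pod-autoscaling-8025", 325)
}
```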
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:54:51.085
Jan 24 20:54:51.086: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 01/24/23 20:54:51.087
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:54:51.398
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:54:51.601
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92
Jan 24 20:54:51.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 24 20:54:52.017: INFO: Waiting for terminating namespaces to be deleted...
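Note: the repeated Phase="Pending" lines further below come from the framework's 2-second pod-phase poll ("Waiting up to 5m0s for pod ... to be 'not pending'"). A standalone sketch of that loop, assuming a client-go clientset (the helper name and the exact log format here are illustrative; the suite uses its own wait helpers):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNotPending polls the pod every 2s until it leaves the Pending phase
// or the timeout (5m0s in the log) expires.
func waitNotPending(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Mirrors the shape of the log lines: phase plus elapsed time per poll.
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase != corev1.PodPending, nil
	})
}
```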
Jan 24 20:54:52.119: INFO: Logging pods the apiserver thinks is on node capz-conf-jzg2c before test Jan 24 20:54:52.229: INFO: calico-node-windows-77tct from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded) Jan 24 20:54:52.229: INFO: Container calico-node-felix ready: true, restart count 1 Jan 24 20:54:52.229: INFO: Container calico-node-startup ready: true, restart count 0 Jan 24 20:54:52.229: INFO: containerd-logger-xt7tr from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container containerd-logger ready: true, restart count 0 Jan 24 20:54:52.229: INFO: csi-azuredisk-node-win-l79cl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (3 container statuses recorded) Jan 24 20:54:52.229: INFO: Container azuredisk ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Container liveness-probe ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 24 20:54:52.229: INFO: csi-proxy-xnqhl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container csi-proxy ready: true, restart count 0 Jan 24 20:54:52.229: INFO: kube-proxy-windows-6szqk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Logging pods the apiserver thinks is on node capz-conf-s4kcn before test Jan 24 20:54:52.339: INFO: calico-node-windows-t9nl5 from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded) Jan 24 20:54:52.339: INFO: Container calico-node-felix ready: true, restart count 1 Jan 24 20:54:52.339: INFO: Container calico-node-startup ready: true, restart count 0 Jan 24 20:54:52.339: INFO: containerd-logger-6ndvk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container containerd-logger ready: true, restart count 0 Jan 24 20:54:52.339: INFO: csi-azuredisk-node-win-8mbvt from kube-system started at 2023-01-24 19:20:59 +0000 UTC (3 container statuses recorded) Jan 24 20:54:52.339: INFO: Container azuredisk ready: true, restart count 0 Jan 24 20:54:52.339: INFO: Container liveness-probe ready: true, restart count 0 Jan 24 20:54:52.339: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 24 20:54:52.339: INFO: csi-proxy-t452z from kube-system started at 2023-01-24 19:20:59 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container csi-proxy ready: true, restart count 0 Jan 24 20:54:52.339: INFO: kube-proxy-windows-9qxpl from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/24/23 20:54:52.339�[0m Jan 24 20:54:52.447: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-540" to be "running" Jan 24 20:54:52.553: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 105.644611ms Jan 24 20:54:54.661: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.213757989s Jan 24 20:54:56.657: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2096737s Jan 24 20:54:58.657: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.209642472s Jan 24 20:54:58.657: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/24/23 20:54:58.76
STEP: Trying to apply a random label on the found node. 01/24/23 20:54:58.976
STEP: verifying the node has the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 95 01/24/23 20:54:59.091
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 01/24/23 20:54:59.196
Jan 24 20:54:59.302: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-540" to be "not pending" Jan 24 20:54:59.405: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.639387ms Jan 24 20:55:01.508: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205909887s Jan 24 20:55:03.511: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208449424s Jan 24 20:55:05.508: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 6.205862474s Jan 24 20:55:05.508: INFO: Pod "pod4" satisfied condition "not pending"
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.5 on the node which pod4 resides and expect not scheduled 01/24/23 20:55:05.508
Jan 24 20:55:05.616: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-540" to be "not pending" Jan 24 20:55:05.721: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.952985ms Jan 24 20:55:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208088075s Jan 24 20:55:09.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207049643s Jan 24 20:55:11.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208209433s Jan 24 20:55:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208378226s Jan 24 20:55:15.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20906035s Jan 24 20:55:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.207916852s Jan 24 20:55:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207792498s Jan 24 20:55:21.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.207963598s Jan 24 20:55:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209851852s Jan 24 20:55:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.209355448s Jan 24 20:55:27.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.2071942s Jan 24 20:55:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.207893379s Jan 24 20:55:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.208357835s Jan 24 20:55:33.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207790178s Jan 24 20:55:35.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.207711667s Jan 24 20:55:37.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.211242898s Jan 24 20:55:39.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.209668603s Jan 24 20:55:41.832: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.215782698s Jan 24 20:55:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.208362443s Jan 24 20:55:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.20855372s Jan 24 20:55:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.208089593s Jan 24 20:55:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.209630923s Jan 24 20:55:51.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.208819704s Jan 24 20:55:53.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.212738013s Jan 24 20:55:55.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.208054626s Jan 24 20:55:57.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.209198482s Jan 24 20:55:59.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.208244091s Jan 24 20:56:01.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.208092609s Jan 24 20:56:03.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.209582676s Jan 24 20:56:05.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.208330771s Jan 24 20:56:07.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.211171515s Jan 24 20:56:09.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.208512171s Jan 24 20:56:11.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.207488152s Jan 24 20:56:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.208239055s Jan 24 20:56:15.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.207868617s Jan 24 20:56:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.207556917s Jan 24 20:56:19.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.211929934s Jan 24 20:56:21.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.207789651s Jan 24 20:56:23.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.210102021s Jan 24 20:56:25.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.208783014s Jan 24 20:56:27.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.212416418s Jan 24 20:56:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.207723703s Jan 24 20:56:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.208504417s Jan 24 20:56:33.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.212889861s Jan 24 20:56:35.835: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.218342332s Jan 24 20:56:37.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.20731873s Jan 24 20:56:39.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.208995399s Jan 24 20:56:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m36.20800997s Jan 24 20:56:43.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.21035981s Jan 24 20:56:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.208670887s Jan 24 20:56:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.208779643s Jan 24 20:56:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.209356645s Jan 24 20:56:51.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.208292084s Jan 24 20:56:53.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.208951516s Jan 24 20:56:55.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.210778153s Jan 24 20:56:57.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.207827701s Jan 24 20:56:59.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.209270799s Jan 24 20:57:01.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.207967153s Jan 24 20:57:03.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.209740858s Jan 24 20:57:05.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.210009219s Jan 24 20:57:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.20834656s Jan 24 20:57:09.830: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.213692305s Jan 24 20:57:11.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.209384375s Jan 24 20:57:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.20877265s Jan 24 20:57:15.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.208333839s Jan 24 20:57:17.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.210072097s Jan 24 20:57:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.207854406s Jan 24 20:57:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.208842921s Jan 24 20:57:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.209356994s Jan 24 20:57:25.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.20900953s Jan 24 20:57:27.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.208431581s Jan 24 20:57:29.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.208207884s Jan 24 20:57:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.208027328s Jan 24 20:57:33.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.210259419s Jan 24 20:57:35.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.208272621s Jan 24 20:57:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.20988958s Jan 24 20:57:39.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.20772785s Jan 24 20:57:41.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.210371682s Jan 24 20:57:43.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.207763356s Jan 24 20:57:45.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m40.207743564s Jan 24 20:57:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.208286942s Jan 24 20:57:49.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.208182644s Jan 24 20:57:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.207892024s Jan 24 20:57:53.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.210034489s Jan 24 20:57:55.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.209518462s Jan 24 20:57:57.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.208469305s Jan 24 20:57:59.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.209324966s Jan 24 20:58:01.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.209220804s Jan 24 20:58:03.831: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.214974671s Jan 24 20:58:05.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.20756225s Jan 24 20:58:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.208840447s Jan 24 20:58:09.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.210014546s Jan 24 20:58:11.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.20838368s Jan 24 20:58:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.20815539s Jan 24 20:58:15.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.208295529s Jan 24 20:58:17.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.209561238s Jan 24 20:58:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.207844371s Jan 24 20:58:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.208169665s Jan 24 20:58:23.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.208623985s Jan 24 20:58:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.209254237s Jan 24 20:58:27.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.20850038s Jan 24 20:58:29.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.209522547s Jan 24 20:58:31.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.207627468s Jan 24 20:58:33.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.208872419s Jan 24 20:58:35.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.208036444s Jan 24 20:58:37.833: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.216500472s Jan 24 20:58:39.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.211367983s Jan 24 20:58:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.207650083s Jan 24 20:58:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.208142009s Jan 24 20:58:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.208487367s Jan 24 20:58:47.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.209481797s Jan 24 20:58:49.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m44.208621068s Jan 24 20:58:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.207905321s Jan 24 20:58:53.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.208132765s Jan 24 20:58:55.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.209827608s Jan 24 20:58:57.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.211371669s Jan 24 20:58:59.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.207562327s Jan 24 20:59:01.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.209912397s Jan 24 20:59:03.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.208034685s Jan 24 20:59:05.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.20819798s Jan 24 20:59:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.208800438s Jan 24 20:59:09.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.208879302s Jan 24 20:59:11.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.207825264s Jan 24 20:59:13.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.210903105s Jan 24 20:59:15.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.212111064s Jan 24 20:59:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.207837623s Jan 24 20:59:19.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.209339957s Jan 24 20:59:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.208582746s Jan 24 20:59:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.20928947s Jan 24 20:59:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.209165853s Jan 24 20:59:27.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.207771751s Jan 24 20:59:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.207871983s Jan 24 20:59:31.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.207771522s Jan 24 20:59:33.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.210351524s Jan 24 20:59:35.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.207921705s Jan 24 20:59:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.209033141s Jan 24 20:59:39.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.209036108s Jan 24 20:59:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.207743492s Jan 24 20:59:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.208868795s Jan 24 20:59:45.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.207921657s Jan 24 20:59:47.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.207822862s Jan 24 20:59:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.209292906s Jan 24 20:59:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.207320541s Jan 24 20:59:53.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m48.207375492s Jan 24 20:59:55.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.207914734s Jan 24 20:59:57.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.208504704s Jan 24 20:59:59.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.208016457s Jan 24 21:00:01.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.210322555s Jan 24 21:00:03.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.207859431s Jan 24 21:00:05.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.209826162s Jan 24 21:00:05.931: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.314598416s
STEP: removing the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 off the node capz-conf-s4kcn 01/24/23 21:00:05.931
STEP: verifying the node doesn't have the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 01/24/23 21:00:06.152
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187
Jan 24 21:00:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-540" for this suite. 01/24/23 21:00:06.362
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","completed":26,"skipped":2067,"failed":0}
------------------------------
• [SLOW TEST] [315.384 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 20:54:51.085
Jan 24 20:54:51.086: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 01/24/23 20:54:51.087
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 20:54:51.398
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 20:54:51.601
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92
Jan 24 20:54:51.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 24 20:54:52.017: INFO: Waiting for terminating namespaces to be deleted...
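Note: the spec that just passed creates pod4 with hostPort 54322 and an empty hostIP (treated as 0.0.0.0, i.e. every node address), then pod5 with the same port and protocol but hostIP 10.1.0.5, pinned to the same node via the random label; since 0.0.0.0 already claims 54322/TCP on all addresses, pod5 can never schedule, and the test passes once it has stayed Pending for the full 5m0s. Roughly the shape of the two pods, as a sketch (container name, image, and helper are illustrative, not the test's actual construction code):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod claiming hostPort 54322 on the given hostIP,
// pinned to a node via nodeSelector; values mirror the log above.
func hostPortPod(name, hostIP string, nodeSelector map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: nodeSelector, // pins both pods to the labeled node
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // image/tag illustrative
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54322,
					HostIP:        hostIP, // "" behaves as 0.0.0.0: all node addresses
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

// Usage, with the label the test applied to the node:
//   sel := map[string]string{"kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9": "95"}
//   pod4 := hostPortPod("pod4", "", sel)         // schedules: 54322/TCP is free
//   pod5 := hostPortPod("pod5", "10.1.0.5", sel) // stays Pending: 0.0.0.0 already claims 54322/TCP
```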
Jan 24 20:54:52.119: INFO: Logging pods the apiserver thinks is on node capz-conf-jzg2c before test Jan 24 20:54:52.229: INFO: calico-node-windows-77tct from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded) Jan 24 20:54:52.229: INFO: Container calico-node-felix ready: true, restart count 1 Jan 24 20:54:52.229: INFO: Container calico-node-startup ready: true, restart count 0 Jan 24 20:54:52.229: INFO: containerd-logger-xt7tr from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container containerd-logger ready: true, restart count 0 Jan 24 20:54:52.229: INFO: csi-azuredisk-node-win-l79cl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (3 container statuses recorded) Jan 24 20:54:52.229: INFO: Container azuredisk ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Container liveness-probe ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 24 20:54:52.229: INFO: csi-proxy-xnqhl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container csi-proxy ready: true, restart count 0 Jan 24 20:54:52.229: INFO: kube-proxy-windows-6szqk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.229: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 20:54:52.229: INFO: Logging pods the apiserver thinks is on node capz-conf-s4kcn before test Jan 24 20:54:52.339: INFO: calico-node-windows-t9nl5 from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded) Jan 24 20:54:52.339: INFO: Container calico-node-felix ready: true, restart count 1 Jan 24 20:54:52.339: INFO: Container calico-node-startup ready: true, restart count 0 Jan 24 20:54:52.339: INFO: containerd-logger-6ndvk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container containerd-logger ready: true, restart count 0 Jan 24 20:54:52.339: INFO: csi-azuredisk-node-win-8mbvt from kube-system started at 2023-01-24 19:20:59 +0000 UTC (3 container statuses recorded) Jan 24 20:54:52.339: INFO: Container azuredisk ready: true, restart count 0 Jan 24 20:54:52.339: INFO: Container liveness-probe ready: true, restart count 0 Jan 24 20:54:52.339: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 24 20:54:52.339: INFO: csi-proxy-t452z from kube-system started at 2023-01-24 19:20:59 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container csi-proxy ready: true, restart count 0 Jan 24 20:54:52.339: INFO: kube-proxy-windows-9qxpl from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded) Jan 24 20:54:52.339: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 �[1mSTEP:�[0m Trying to launch a pod without a label to get a node which can launch it. �[38;5;243m01/24/23 20:54:52.339�[0m Jan 24 20:54:52.447: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-540" to be "running" Jan 24 20:54:52.553: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 105.644611ms Jan 24 20:54:54.661: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.213757989s Jan 24 20:54:56.657: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2096737s Jan 24 20:54:58.657: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.209642472s Jan 24 20:54:58.657: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/24/23 20:54:58.76
STEP: Trying to apply a random label on the found node. 01/24/23 20:54:58.976
STEP: verifying the node has the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 95 01/24/23 20:54:59.091
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 01/24/23 20:54:59.196
Jan 24 20:54:59.302: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-540" to be "not pending" Jan 24 20:54:59.405: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.639387ms Jan 24 20:55:01.508: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205909887s Jan 24 20:55:03.511: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208449424s Jan 24 20:55:05.508: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 6.205862474s Jan 24 20:55:05.508: INFO: Pod "pod4" satisfied condition "not pending"
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.5 on the node which pod4 resides and expect not scheduled 01/24/23 20:55:05.508
Jan 24 20:55:05.616: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-540" to be "not pending" Jan 24 20:55:05.721: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.952985ms Jan 24 20:55:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208088075s Jan 24 20:55:09.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207049643s Jan 24 20:55:11.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208209433s Jan 24 20:55:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208378226s Jan 24 20:55:15.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20906035s Jan 24 20:55:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.207916852s Jan 24 20:55:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207792498s Jan 24 20:55:21.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.207963598s Jan 24 20:55:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209851852s Jan 24 20:55:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.209355448s Jan 24 20:55:27.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.2071942s Jan 24 20:55:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.207893379s Jan 24 20:55:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.208357835s Jan 24 20:55:33.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207790178s Jan 24 20:55:35.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.207711667s Jan 24 20:55:37.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.211242898s Jan 24 20:55:39.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.209668603s Jan 24 20:55:41.832: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.215782698s Jan 24 20:55:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.208362443s Jan 24 20:55:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.20855372s Jan 24 20:55:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.208089593s Jan 24 20:55:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.209630923s Jan 24 20:55:51.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.208819704s Jan 24 20:55:53.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.212738013s Jan 24 20:55:55.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.208054626s Jan 24 20:55:57.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.209198482s Jan 24 20:55:59.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.208244091s Jan 24 20:56:01.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.208092609s Jan 24 20:56:03.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.209582676s Jan 24 20:56:05.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.208330771s Jan 24 20:56:07.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.211171515s Jan 24 20:56:09.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.208512171s Jan 24 20:56:11.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.207488152s Jan 24 20:56:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.208239055s Jan 24 20:56:15.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.207868617s Jan 24 20:56:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.207556917s Jan 24 20:56:19.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.211929934s Jan 24 20:56:21.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.207789651s Jan 24 20:56:23.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.210102021s Jan 24 20:56:25.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.208783014s Jan 24 20:56:27.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.212416418s Jan 24 20:56:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.207723703s Jan 24 20:56:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.208504417s Jan 24 20:56:33.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.212889861s Jan 24 20:56:35.835: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.218342332s Jan 24 20:56:37.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.20731873s Jan 24 20:56:39.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.208995399s Jan 24 20:56:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m36.20800997s Jan 24 20:56:43.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.21035981s Jan 24 20:56:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.208670887s Jan 24 20:56:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.208779643s Jan 24 20:56:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.209356645s Jan 24 20:56:51.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.208292084s Jan 24 20:56:53.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.208951516s Jan 24 20:56:55.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.210778153s Jan 24 20:56:57.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.207827701s Jan 24 20:56:59.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.209270799s Jan 24 20:57:01.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.207967153s Jan 24 20:57:03.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.209740858s Jan 24 20:57:05.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.210009219s Jan 24 20:57:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.20834656s Jan 24 20:57:09.830: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.213692305s Jan 24 20:57:11.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.209384375s Jan 24 20:57:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.20877265s Jan 24 20:57:15.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.208333839s Jan 24 20:57:17.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.210072097s Jan 24 20:57:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.207854406s Jan 24 20:57:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.208842921s Jan 24 20:57:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.209356994s Jan 24 20:57:25.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.20900953s Jan 24 20:57:27.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.208431581s Jan 24 20:57:29.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.208207884s Jan 24 20:57:31.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.208027328s Jan 24 20:57:33.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.210259419s Jan 24 20:57:35.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.208272621s Jan 24 20:57:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.20988958s Jan 24 20:57:39.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.20772785s Jan 24 20:57:41.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.210371682s Jan 24 20:57:43.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.207763356s Jan 24 20:57:45.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m40.207743564s Jan 24 20:57:47.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.208286942s Jan 24 20:57:49.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.208182644s Jan 24 20:57:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.207892024s Jan 24 20:57:53.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.210034489s Jan 24 20:57:55.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.209518462s Jan 24 20:57:57.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.208469305s Jan 24 20:57:59.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.209324966s Jan 24 20:58:01.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.209220804s Jan 24 20:58:03.831: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.214974671s Jan 24 20:58:05.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.20756225s Jan 24 20:58:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.208840447s Jan 24 20:58:09.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.210014546s Jan 24 20:58:11.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.20838368s Jan 24 20:58:13.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.20815539s Jan 24 20:58:15.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.208295529s Jan 24 20:58:17.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.209561238s Jan 24 20:58:19.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.207844371s Jan 24 20:58:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.208169665s Jan 24 20:58:23.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.208623985s Jan 24 20:58:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.209254237s Jan 24 20:58:27.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.20850038s Jan 24 20:58:29.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.209522547s Jan 24 20:58:31.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.207627468s Jan 24 20:58:33.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.208872419s Jan 24 20:58:35.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.208036444s Jan 24 20:58:37.833: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.216500472s Jan 24 20:58:39.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.211367983s Jan 24 20:58:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.207650083s Jan 24 20:58:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.208142009s Jan 24 20:58:45.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.208487367s Jan 24 20:58:47.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.209481797s Jan 24 20:58:49.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m44.208621068s Jan 24 20:58:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.207905321s Jan 24 20:58:53.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.208132765s Jan 24 20:58:55.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.209827608s Jan 24 20:58:57.828: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.211371669s Jan 24 20:58:59.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.207562327s Jan 24 20:59:01.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.209912397s Jan 24 20:59:03.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.208034685s Jan 24 20:59:05.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.20819798s Jan 24 20:59:07.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.208800438s Jan 24 20:59:09.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.208879302s Jan 24 20:59:11.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.207825264s Jan 24 20:59:13.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.210903105s Jan 24 20:59:15.829: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.212111064s Jan 24 20:59:17.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.207837623s Jan 24 20:59:19.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.209339957s Jan 24 20:59:21.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.208582746s Jan 24 20:59:23.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.20928947s Jan 24 20:59:25.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.209165853s Jan 24 20:59:27.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.207771751s Jan 24 20:59:29.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.207871983s Jan 24 20:59:31.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.207771522s Jan 24 20:59:33.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.210351524s Jan 24 20:59:35.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.207921705s Jan 24 20:59:37.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.209033141s Jan 24 20:59:39.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.209036108s Jan 24 20:59:41.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.207743492s Jan 24 20:59:43.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.208868795s Jan 24 20:59:45.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.207921657s Jan 24 20:59:47.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.207822862s Jan 24 20:59:49.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.209292906s Jan 24 20:59:51.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.207320541s Jan 24 20:59:53.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m48.207375492s Jan 24 20:59:55.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.207914734s Jan 24 20:59:57.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.208504704s Jan 24 20:59:59.825: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.208016457s Jan 24 21:00:01.827: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.210322555s Jan 24 21:00:03.824: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.207859431s Jan 24 21:00:05.826: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.209826162s Jan 24 21:00:05.931: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.314598416s �[1mSTEP:�[0m removing the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 off the node capz-conf-s4kcn �[38;5;243m01/24/23 21:00:05.931�[0m �[1mSTEP:�[0m verifying the node doesn't have the label kubernetes.io/e2e-100af512-6a72-4a6f-a876-d538f4461af9 �[38;5;243m01/24/23 21:00:06.152�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Jan 24 21:00:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "sched-pred-540" for this suite. �[38;5;243m01/24/23 21:00:06.362�[0m [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-apps] ControllerRevision [Serial]�[0m �[1mshould manage the lifecycle of a ControllerRevision [Conformance]�[0m �[38;5;243mtest/e2e/apps/controller_revision.go:124�[0m [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 21:00:06.473�[0m Jan 24 21:00:06.473: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename controllerrevisions �[38;5;243m01/24/23 21:00:06.475�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 21:00:06.785�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 21:00:06.987�[0m [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93 [It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 �[1mSTEP:�[0m Creating DaemonSet "e2e-vffc6-daemon-set" �[38;5;243m01/24/23 21:00:07.611�[0m �[1mSTEP:�[0m Check that daemon pods launch on every node of the cluster. 
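What this spec exercises: a pod that binds a hostPort on 0.0.0.0 claims that port for every host IP, so a second pod requesting the same port and protocol on a specific hostIP can never fit on the same node and must stay Pending. A minimal sketch of the two pod specs, assuming placeholder names, image, and node label (the e2e test generates a random label and uses its own test image):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithHostPort builds a pod that binds hostPort 54322 on TCP, pinned to
// one node via nodeSelector so that two such pods are forced to collide.
// The label key/value and image here are illustrative assumptions.
func podWithHostPort(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "95"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP, // "" means 0.0.0.0, i.e. every host IP
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}
```

Creating podWithHostPort("pod4", "") should schedule, while podWithHostPort("pod5", "10.1.0.5") on the same node should remain Pending; that expected stall is exactly the five-minute Pending loop recorded above.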
[sig-apps] ControllerRevision [Serial]
should manage the lifecycle of a ControllerRevision [Conformance]
test/e2e/apps/controller_revision.go:124
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:00:06.473
Jan 24 21:00:06.473: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename controllerrevisions 01/24/23 21:00:06.475
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:00:06.785
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:00:06.987
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/apps/controller_revision.go:93
[It] should manage the lifecycle of a ControllerRevision [Conformance]
  test/e2e/apps/controller_revision.go:124
STEP: Creating DaemonSet "e2e-vffc6-daemon-set" 01/24/23 21:00:07.611
STEP: Check that daemon pods launch on every node of the cluster. 01/24/23 21:00:07.718
Jan 24 21:00:07.829: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:07.933: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 0
Jan 24 21:00:07.933: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 21:00:09.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:09.144: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 0
Jan 24 21:00:09.144: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 21:00:10.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:10.143: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 0
Jan 24 21:00:10.143: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 21:00:11.042: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:11.145: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 0
Jan 24 21:00:11.145: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1
Jan 24 21:00:12.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:12.144: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 1
Jan 24 21:00:12.144: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 21:00:13.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:13.143: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 1
Jan 24 21:00:13.143: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 21:00:14.041: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:14.145: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 1
Jan 24 21:00:14.145: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 21:00:15.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:15.144: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 1
Jan 24 21:00:15.144: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1
Jan 24 21:00:16.040: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 21:00:16.144: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 2
Jan 24 21:00:16.144: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-vffc6-daemon-set
STEP: Confirm DaemonSet "e2e-vffc6-daemon-set" successfully created with "daemonset-name=e2e-vffc6-daemon-set" label 01/24/23 21:00:16.249
STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-vffc6-daemon-set" 01/24/23 21:00:16.455
Jan 24 21:00:16.558: INFO: Located ControllerRevision: "e2e-vffc6-daemon-set-769fcdcbcb"
STEP: Patching ControllerRevision "e2e-vffc6-daemon-set-769fcdcbcb" 01/24/23 21:00:16.66
Jan 24 21:00:16.770: INFO: e2e-vffc6-daemon-set-769fcdcbcb has been patched
STEP: Create a new ControllerRevision 01/24/23 21:00:16.77
Jan 24 21:00:16.876: INFO: Created ControllerRevision: e2e-vffc6-daemon-set-74c468b958
STEP: Confirm that there are two ControllerRevisions 01/24/23 21:00:16.876
Jan 24 21:00:16.876: INFO: Requesting list of ControllerRevisions to confirm quantity
Jan 24 21:00:16.981: INFO: Found 2 ControllerRevisions
STEP: Deleting ControllerRevision "e2e-vffc6-daemon-set-769fcdcbcb" 01/24/23 21:00:16.981
STEP: Confirm that there is only one ControllerRevision 01/24/23 21:00:17.087
Jan 24 21:00:17.087: INFO: Requesting list of ControllerRevisions to confirm quantity
Jan 24 21:00:17.189: INFO: Found 1 ControllerRevisions
STEP: Updating ControllerRevision "e2e-vffc6-daemon-set-74c468b958" 01/24/23 21:00:17.292
Jan 24 21:00:17.502: INFO: e2e-vffc6-daemon-set-74c468b958 has been updated
STEP: Generate another ControllerRevision by patching the Daemonset 01/24/23 21:00:17.502
W0124 21:00:17.612485 14 warnings.go:70] unknown field "updateStrategy"
STEP: Confirm that there are two ControllerRevisions 01/24/23 21:00:17.612
Jan 24 21:00:17.612: INFO: Requesting list of ControllerRevisions to confirm quantity
Jan 24 21:00:17.714: INFO: Found 2 ControllerRevisions
STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-vffc6-daemon-set-74c468b958=updated" 01/24/23 21:00:17.714
STEP: Confirm that there is only one ControllerRevision 01/24/23 21:00:17.823
Jan 24 21:00:17.823: INFO: Requesting list of ControllerRevisions to confirm quantity
Jan 24 21:00:17.926: INFO: Found 1 ControllerRevisions
Jan 24 21:00:18.028: INFO: ControllerRevision "e2e-vffc6-daemon-set-95b849c59" has revision 3
[AfterEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/apps/controller_revision.go:58
STEP: Deleting DaemonSet "e2e-vffc6-daemon-set" 01/24/23 21:00:18.131
STEP: deleting DaemonSet.extensions e2e-vffc6-daemon-set in namespace controllerrevisions-4637, will wait for the garbage collector to delete the pods 01/24/23 21:00:18.131
Jan 24 21:00:18.492: INFO: Deleting DaemonSet.extensions e2e-vffc6-daemon-set took: 106.997535ms
Jan 24 21:00:18.592: INFO: Terminating DaemonSet.extensions e2e-vffc6-daemon-set pods took: 100.358939ms
Jan 24 21:00:23.095: INFO: Number of nodes with available pods controlled by daemonset e2e-vffc6-daemon-set: 0
Jan 24 21:00:23.095: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-vffc6-daemon-set
Jan 24 21:00:23.197: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22376"},"items":null}
Jan 24 21:00:23.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22376"},"items":null}
[AfterEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/framework/framework.go:187
Jan 24 21:00:23.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "controllerrevisions-4637" for this suite. 01/24/23 21:00:23.719
{"msg":"PASSED [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]","completed":27,"skipped":2084,"failed":0}
------------------------------
• [17.355 seconds]
[sig-apps] ControllerRevision [Serial]
  test/e2e/apps/framework.go:23
  should manage the lifecycle of a ControllerRevision [Conformance]
  test/e2e/apps/controller_revision.go:124
------------------------------
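The passing spec above walks a ControllerRevision through list, patch, create, delete, update, and DeleteCollection. A sketch of the same lifecycle against the apps/v1 client, assuming an existing clientset and namespace; the label selectors stand in for the test's generated names:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// walkControllerRevisions mirrors the lifecycle exercised by the test:
// list revisions by the DaemonSet's label, delete one revision directly,
// then delete by label selector via DeleteCollection.
func walkControllerRevisions(ctx context.Context, cs kubernetes.Interface, ns string) error {
	crs := cs.AppsV1().ControllerRevisions(ns)

	// List all revisions carrying the DaemonSet's label.
	list, err := crs.List(ctx, metav1.ListOptions{
		LabelSelector: "daemonset-name=e2e-example-daemon-set", // placeholder label
	})
	if err != nil {
		return err
	}
	for _, cr := range list.Items {
		fmt.Printf("revision %d: %s\n", cr.Revision, cr.Name)
	}

	// Delete one named revision directly...
	if len(list.Items) > 0 {
		if err := crs.Delete(ctx, list.Items[0].Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}

	// ...and remove any revision matching a label via DeleteCollection,
	// as the test does with its "<name>=updated" selector.
	return crs.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{
		LabelSelector: "e2e-example=updated", // placeholder label
	})
}
```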
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
[Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
Should not scale up on a busy sidecar with an idle application
test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:00:23.841
Jan 24 21:00:23.841: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:00:23.842
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:00:24.157
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:00:24.361
[It] Should not scale up on a busy sidecar with an idle application
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 01/24/23 21:00:24.565
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-633 01/24/23 21:00:24.684
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-633 01/24/23 21:00:24.684
I0124 21:00:24.795326 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-633, replica count: 1
I0124 21:00:34.946051 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 21:00:34.946
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-633 01/24/23 21:00:35.065
I0124 21:00:35.171600 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-633, replica count: 1
I0124 21:00:45.323840 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:00:50.327: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1
STEP: Running consuming RC sidecar rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 01/24/23 21:00:50.434
STEP: Running controller for sidecar 01/24/23 21:00:50.553
STEP: creating replication controller rs-sidecar-ctrl in namespace horizontal-pod-autoscaling-633 01/24/23 21:00:50.67
I0124 21:00:50.776497 14 runners.go:193] Created replication controller with name: rs-sidecar-ctrl, namespace: horizontal-pod-autoscaling-633, replica count: 1
I0124 21:01:00.928270 14 runners.go:193] rs-sidecar-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:01:05.929: INFO: Waiting for amount of service:rs-sidecar-ctrl endpoints to be 1
Jan 24 21:01:06.031: INFO: RC rs: consume 250 millicores in total
Jan 24 21:01:06.031: INFO: RC rs: setting consumption to 250 millicores in total
Jan 24 21:01:06.031: INFO: RC rs: consume 0 MB in total
Jan 24 21:01:06.031: INFO: RC rs: consume custom metric 0 in total
Jan 24 21:01:06.031: INFO: RC rs: disabling consumption of custom metric QPS
Jan 24 21:01:06.031: INFO: RC rs: disabling mem consumption
Jan 24 21:01:06.243: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:06.346: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:01:16.450: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:16.552: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:01:26.450: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:26.552: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:01:36.032: INFO: RC rs: sending request to consume 250 millicores
Jan 24 21:01:36.032: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-633/services/rs-sidecar-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Jan 24 21:01:36.449: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:36.552: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:01:46.450: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:46.553: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:01:56.451: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:01:56.553: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:02:06.189: INFO: RC rs: sending request to consume 250 millicores
Jan 24 21:02:06.189: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-633/services/rs-sidecar-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Jan 24 21:02:06.450: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:02:06.552: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:02:06.654: INFO: expecting there to be in [1, 1] replicas (are: 1)
Jan 24 21:02:06.756: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>}
Jan 24 21:02:06.756: INFO: Number of replicas was stable over 1m0s
STEP: Removing consuming RC rs 01/24/23 21:02:06.864
Jan 24 21:02:06.864: INFO: RC rs: stopping metric consumer
Jan 24 21:02:06.864: INFO: RC rs: stopping CPU consumer
Jan 24 21:02:06.864: INFO: RC rs: stopping mem consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-633, will wait for the garbage collector to delete the pods 01/24/23 21:02:16.864
Jan 24 21:02:17.226: INFO: Deleting ReplicaSet.apps rs took: 108.204096ms
Jan 24 21:02:17.327: INFO: Terminating ReplicaSet.apps rs pods took: 100.643179ms
STEP: deleting ReplicationController rs-sidecar-ctrl in namespace horizontal-pod-autoscaling-633, will wait for the garbage collector to delete the pods 01/24/23 21:02:20.065
Jan 24 21:02:20.426: INFO: Deleting ReplicationController rs-sidecar-ctrl took: 107.443628ms
Jan 24 21:02:20.527: INFO: Terminating ReplicationController rs-sidecar-ctrl pods took: 100.395632ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:187
Jan 24 21:02:22.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-633" for this suite. 01/24/23 21:02:22.638
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application","completed":28,"skipped":2262,"failed":0}
------------------------------
• [118.904 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
  Should not scale up on a busy sidecar with an idle application
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
------------------------------
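The scenario above relies on the autoscaling/v2 ContainerResource metric source, which evaluates CPU for one named container rather than summing the whole pod, so a saturated sidecar cannot force a scale-up. A hedged sketch of such an HPA object; the names and the 20% target are illustrative, not the test's exact values:

```go
package sketch

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ptr32 is a tiny helper for the int32 pointers the API requires.
func ptr32(v int32) *int32 { return &v }

// containerResourceHPA scales the "rs" ReplicaSet on the CPU utilization of
// its application container only, so a busy sidecar is ignored entirely.
func containerResourceHPA(ns string) *autoscalingv2.HorizontalPodAutoscaler {
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-hpa", Namespace: ns},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "ReplicaSet", Name: "rs",
			},
			MinReplicas: ptr32(1),
			MaxReplicas: 3,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ContainerResourceMetricSourceType,
				ContainerResource: &autoscalingv2.ContainerResourceMetricSource{
					Name:      corev1.ResourceCPU,
					Container: "application", // the idle container, not the sidecar
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: ptr32(20), // placeholder threshold
					},
				},
			}},
		},
	}
}
```

With the metric bound to the "application" container, the 250 millicores the sidecar burns in the log above never enters the utilization calculation, which is why the replica count stays pinned at 1.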
[sig-api-machinery] Garbage collector
should delete jobs and pods created by cronjob
test/e2e/apimachinery/garbage_collector.go:1145
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:02:22.749
Jan 24 21:02:22.749: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/24/23 21:02:22.75
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:02:23.063
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:02:23.271
[It] should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
STEP: Create the cronjob 01/24/23 21:02:23.474
STEP: Wait for the CronJob to create new Job 01/24/23 21:02:23.582
STEP: Delete the cronjob 01/24/23 21:03:00.286
STEP: Verify if cronjob does not leave jobs nor pods behind 01/24/23 21:03:00.392
STEP: Gathering metrics 01/24/23 21:03:00.7
Jan 24 21:03:01.023: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5" in namespace "kube-system" to be "running and ready"
Jan 24 21:03:01.127: INFO: Pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5": Phase="Running", Reason="", readiness=true. Elapsed: 103.580312ms
Jan 24 21:03:01.127: INFO: The phase of Pod kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5 is Running (Ready = true)
Jan 24 21:03:01.127: INFO: Pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5" satisfied condition "running and ready"
Jan 24 21:03:01.973: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jan 24 21:03:01.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8477" for this suite. 01/24/23 21:03:02.08
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","completed":29,"skipped":2286,"failed":0}
------------------------------
• [39.438 seconds]
[sig-api-machinery] Garbage collector
  test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
------------------------------
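The garbage-collector spec above hinges on ownerReferences: the CronJob owns its Jobs, which own their Pods, so deleting the CronJob lets the garbage collector sweep everything beneath it. A minimal sketch of that deletion, assuming an existing clientset; the function name is hypothetical:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteCronJobAndDependents removes a CronJob and lets the garbage
// collector chase the ownerReference chain (CronJob -> Job -> Pod).
// With Background propagation the call returns immediately and the GC
// deletes the dependents asynchronously, which is the behavior the
// "does not leave jobs nor pods behind" step verifies.
func deleteCronJobAndDependents(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().CronJobs(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```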
------------------------------
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end test/e2e/windows/gmsa_full.go:97
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:03:02.188
Jan 24 21:03:02.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows 01/24/23 21:03:02.189
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:03:02.499
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:03:02.701
[It] works end to end test/e2e/windows/gmsa_full.go:97
STEP: finding the worker node that fulfills this test's assumptions 01/24/23 21:03:02.906
Jan 24 21:03:03.009: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:187
Jan 24 21:03:03.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-full-test-windows-2398" for this suite.
01/24/23 21:03:03.124
------------------------------
S [SKIPPED] [1.062 seconds]
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:27
GMSA support test/e2e/windows/gmsa_full.go:96
[It] works end to end test/e2e/windows/gmsa_full.go:97
Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
In [It] at: test/e2e/windows/gmsa_full.go:103
Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.glob..func5.1.1()
	test/e2e/windows/gmsa_full.go:103 +0x5ea
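This skip is the spec guarding its own precondition: it wants exactly one node carrying the agentpool=windowsgmsa label, and this cluster template provisions none, so the GMSA suite cannot run. A hedged client-go sketch of that node-selection check, not the suite's code; only the label selector and kubeconfig path come from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The GMSA spec expects this selector to match exactly one node;
	// in the run above it matched zero, so the spec skipped itself.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(),
		metav1.ListOptions{LabelSelector: "agentpool=windowsgmsa"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d node(s) with agentpool=windowsgmsa\n", len(nodes.Items))
}

Labeling exactly one Windows worker (for example, kubectl label node <node> agentpool=windowsgmsa) would satisfy this particular check, though the full GMSA flow has further environment requirements.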
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:03:03.267
Jan 24 21:03:03.267: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/24/23 21:03:03.268
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:03:03.589
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:03:03.793
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 24 21:03:04.317: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 21:04:05.076: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322
STEP: Trying to get 2 available nodes which can run pod 01/24/23 21:04:05.179
STEP: Trying to launch a pod without a label to get a node which can launch it. 01/24/23 21:04:05.179
Jan 24 21:04:05.291: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:05.393: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 102.304008ms
Jan 24 21:04:07.496: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205330729s
Jan 24 21:04:09.496: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204796063s
Jan 24 21:04:11.496: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.204823277s
Jan 24 21:04:11.496: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/24/23 21:04:11.597
STEP: Trying to launch a pod without a label to get a node which can launch it. 01/24/23 21:04:11.715
Jan 24 21:04:11.822: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:11.924: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 102.300265ms
Jan 24 21:04:14.027: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205105515s
Jan 24 21:04:16.028: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206355907s
Jan 24 21:04:18.027: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.205439206s
Jan 24 21:04:18.027: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 01/24/23 21:04:18.13
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 01/24/23 21:04:18.255
STEP: Apply 10 fake resource to node capz-conf-s4kcn. 01/24/23 21:04:18.473
STEP: Apply 10 fake resource to node capz-conf-jzg2c. 01/24/23 21:04:18.817
[It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 01/24/23 21:04:18.935
Jan 24 21:04:19.040: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:19.143: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 102.837549ms
Jan 24 21:04:21.247: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207011088s
Jan 24 21:04:23.248: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207690605s
Jan 24 21:04:25.251: INFO: Pod "high": Phase="Running", Reason="", readiness=true.
Elapsed: 6.210948978s
Jan 24 21:04:25.251: INFO: Pod "high" satisfied condition "running"
Jan 24 21:04:25.471: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:25.574: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 103.355149ms
Jan 24 21:04:27.680: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208977647s
Jan 24 21:04:29.679: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207920881s
Jan 24 21:04:31.678: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 6.2076244s
Jan 24 21:04:31.678: INFO: Pod "low-1" satisfied condition "running"
Jan 24 21:04:32.075: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:32.178: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 102.557994ms
Jan 24 21:04:34.282: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206742846s
Jan 24 21:04:36.282: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206447075s
Jan 24 21:04:38.284: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 6.208126861s
Jan 24 21:04:38.284: INFO: Pod "low-2" satisfied condition "running"
Jan 24 21:04:38.500: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:38.606: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 106.00005ms
Jan 24 21:04:40.712: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211629964s
Jan 24 21:04:42.711: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210160047s
Jan 24 21:04:44.711: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 6.210671465s
Jan 24 21:04:44.711: INFO: Pod "low-3" satisfied condition "running"
STEP: Create 1 Medium Pod with TopologySpreadConstraints 01/24/23 21:04:44.814
Jan 24 21:04:44.922: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-8901" to be "running"
Jan 24 21:04:45.024: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 102.212357ms
Jan 24 21:04:47.136: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21398773s
Jan 24 21:04:49.127: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204861676s
Jan 24 21:04:51.127: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205313616s
Jan 24 21:04:53.129: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 8.206595957s
Jan 24 21:04:53.129: INFO: Pod "medium" satisfied condition "running"
STEP: Verify there are 3 Pods left in this namespace 01/24/23 21:04:53.232
STEP: Pod "high" is as expected to be running. 01/24/23 21:04:53.336
STEP: Pod "low-1" is as expected to be running. 01/24/23 21:04:53.336
STEP: Pod "medium" is as expected to be running.
01/24/23 21:04:53.336
[AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-s4kcn 01/24/23 21:04:53.336
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 01/24/23 21:04:53.55
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-jzg2c 01/24/23 21:04:53.654
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 01/24/23 21:04:53.875
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Jan 24 21:04:54.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8901" for this suite. 01/24/23 21:04:54.304
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","completed":30,"skipped":2479,"failed":0}
------------------------------
• [111.681 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption test/e2e/scheduling/preemption.go:316
validates proper pods are preempted test/e2e/scheduling/preemption.go:355
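For context, a sketch of the kind of "medium" Pod this spec creates: its topologySpreadConstraints use the dedicated kubernetes.io/e2e-pts-preemption key the test applied to the two nodes, so placing it forces the scheduler to preempt a lower-priority "low" Pod on whichever node keeps the skew within maxSkew. The topology key and namespace come from the log; the labels, image, and priority class name are assumptions, not the suite's fixtures:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "medium"}, // assumed label
		},
		Spec: corev1.PodSpec{
			// Assumed class name: it only has to outrank the "low" Pods so
			// the scheduler may evict one of them to place this Pod.
			PriorityClassName: "medium-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8", // placeholder image
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // from the log
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("sched-preemption-8901").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}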
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:04:54.956
Jan 24 21:04:54.956: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 01/24/23 21:04:54.957
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:04:55.272
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:04:55.475
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 24 21:04:55.996: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 21:05:56.756: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:05:56.859
Jan 24 21:05:56.860: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path 01/24/23 21:05:56.861
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:05:57.171
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:05:57.376
[BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733
Jan 24 21:05:57.894: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
Jan 24 21:05:57.997: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:187
Jan 24 21:05:58.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4412" for this suite. 01/24/23 21:05:58.624
[AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Jan 24 21:05:58.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8959" for this suite. 01/24/23 21:05:58.954
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","completed":31,"skipped":2558,"failed":0}
------------------------------
• [64.661 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
PriorityClass endpoints test/e2e/scheduling/preemption.go:683
verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733
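The two "Forbidden" messages are the expected outcome rather than a failure: PriorityClass.value is immutable, so the spec's update attempts on p1 and p2 must be rejected by the API server. A minimal sketch that reproduces the same error; the name and values here are illustrative, not the conformance test's own fixtures:

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pcs := client.SchedulingV1().PriorityClasses()

	pc, err := pcs.Create(context.TODO(), &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-priority"}, // placeholder name
		Value:      100,
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	pc.Value = 200 // value is immutable, so the API server rejects this update
	if _, err := pcs.Update(context.TODO(), pc, metav1.UpdateOptions{}); err != nil {
		// Expected: ... is invalid: value: Forbidden: may not be changed in an update.
		fmt.Println(err)
	}
}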
------------------------------
[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds test/e2e/windows/kubelet_stats.go:47
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:05:59.618
Jan 24 21:05:59.618: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-stats-test-windows-serial 01/24/23 21:05:59.62
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:05:59.93
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:06:00.133
[It] should return within 10 seconds test/e2e/windows/kubelet_stats.go:47
STEP: Selecting a Windows node 01/24/23 21:06:00.336
Jan 24 21:06:00.441: INFO: Using node: capz-conf-jzg2c
STEP: Scheduling 10 pods 01/24/23 21:06:00.441
Jan 24 21:06:00.557: INFO: Waiting up to 5m0s for pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.558: INFO: Waiting up to 5m0s for pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.560: INFO: Waiting up to 5m0s for pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.561: INFO: Waiting up to 5m0s for pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.659: INFO: Waiting up to 5m0s for pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.668: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 110.340146ms
Jan 24 21:06:00.668: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 21:06:00.669: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false.
Elapsed: 108.382713ms Jan 24 21:06:00.669: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.669: INFO: Waiting up to 5m0s for pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready" Jan 24 21:06:00.669: INFO: Waiting up to 5m0s for pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready" Jan 24 21:06:00.670: INFO: Waiting up to 5m0s for pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready" Jan 24 21:06:00.670: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.61074ms Jan 24 21:06:00.670: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.671: INFO: Waiting up to 5m0s for pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready" Jan 24 21:06:00.671: INFO: Waiting up to 5m0s for pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6" in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready" Jan 24 21:06:00.674: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 116.277095ms Jan 24 21:06:00.674: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.762: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 102.483281ms Jan 24 21:06:00.762: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.772: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 103.31471ms Jan 24 21:06:00.772: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 103.161561ms Jan 24 21:06:00.772: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.772: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.773: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 103.168424ms Jan 24 21:06:00.773: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.779: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.527538ms Jan 24 21:06:00.779: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:00.780: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 108.924454ms Jan 24 21:06:00.780: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.770: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212610414s Jan 24 21:06:02.770: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.771: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210546115s Jan 24 21:06:02.771: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.774: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213287382s Jan 24 21:06:02.774: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.776: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218431142s Jan 24 21:06:02.776: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.865: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205531039s Jan 24 21:06:02.865: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205351772s Jan 24 21:06:02.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206980996s Jan 24 21:06:02.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.878: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209018058s Jan 24 21:06:02.878: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.21136119s Jan 24 21:06:02.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:02.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212211035s Jan 24 21:06:02.883: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21132091s Jan 24 21:06:04.772: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.772: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214916077s Jan 24 21:06:04.772: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212574354s Jan 24 21:06:04.773: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218829014s Jan 24 21:06:04.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.865: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205759419s Jan 24 21:06:04.865: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.875: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206062252s Jan 24 21:06:04.875: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.875: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206494901s Jan 24 21:06:04.875: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205992358s Jan 24 21:06:04.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.210930216s Jan 24 21:06:04.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:04.882: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211249067s Jan 24 21:06:04.882: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.773: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215988309s Jan 24 21:06:06.773: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.773: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213127235s Jan 24 21:06:06.773: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.775: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214851762s Jan 24 21:06:06.775: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22028213s Jan 24 21:06:06.778: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206254293s Jan 24 21:06:06.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206609114s Jan 24 21:06:06.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205824665s Jan 24 21:06:06.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.877: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207824772s Jan 24 21:06:06.877: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.211859107s Jan 24 21:06:06.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:06.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212255551s Jan 24 21:06:06.883: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.771: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213730012s Jan 24 21:06:08.771: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211456104s Jan 24 21:06:08.772: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21255136s Jan 24 21:06:08.773: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219836611s Jan 24 21:06:08.778: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206360129s Jan 24 21:06:08.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206576769s Jan 24 21:06:08.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206498386s Jan 24 21:06:08.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.877: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206662702s Jan 24 21:06:08.877: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:08.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.211716794s
Jan 24 21:06:08 to 21:06:28: INFO: All 10 statscollectiontest pods in namespace 'kubelet-stats-test-windows-serial-4573' still reported Phase="Pending" (readiness=false) on every ~2s poll (elapsed 8s through 28s), waiting to be Running (with Ready = true).
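For context, the repeated "Pending, waiting for it to be Running (with Ready = true)" records above come from a poll loop that re-reads each pod's status every ~2s against a 5m0s timeout. Below is a minimal sketch of that pattern using client-go; the package and helper names (podwait, waitForPodRunningAndReady) and the printed wording are illustrative assumptions, not the e2e framework's actual helpers.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isPodRunningAndReady reports whether the pod is Running with the Ready
// condition True -- the state the log records above are waiting for.
func isPodRunningAndReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodRunningAndReady polls the pod every 2s for up to 5m (the
// interval and timeout visible in the log) until it is Running and Ready.
func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if !isPodRunningAndReady(pod) {
			fmt.Printf("Pod %q: Phase=%q, readiness=false. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
			return false, nil
		}
		fmt.Printf("Pod %q satisfied condition \"running and ready\". Elapsed: %v\n", name, time.Since(start))
		return true, nil
	})
}

Each of the 10 pods gets its own such wait, which is why the log interleaves one status record per pod on every 2s tick.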
Jan 24 21:06:30 to 21:06:42: INFO: the pods transitioned to Running (Ready = true) and satisfied condition "running and ready" as follows:
Jan 24 21:06:30.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9" satisfied condition "running and ready" (elapsed 30.2s)
Jan 24 21:06:32.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4" satisfied condition "running and ready" (elapsed 32.2s)
Jan 24 21:06:32.773: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2" satisfied condition "running and ready" (elapsed 32.2s)
Jan 24 21:06:34.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0" satisfied condition "running and ready" (elapsed 34.2s)
Jan 24 21:06:36.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7" satisfied condition "running and ready" (elapsed 36.2s)
Jan 24 21:06:36.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3" satisfied condition "running and ready" (elapsed 36.2s)
Jan 24 21:06:40.867: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5" satisfied condition "running and ready" (elapsed 40.2s)
Jan 24 21:06:40.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8" satisfied condition "running and ready" (elapsed 40.2s)
Jan 24 21:06:42.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1" satisfied condition "running and ready" (elapsed 42.2s)
Jan 24 21:06:42.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6" satisfied condition "running and ready" (elapsed 42.2s)
STEP: Waiting up to 3 minutes for pods to be running 01/24/23 21:06:42.986
Jan 24 21:06:42.987: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-4573' to be running and ready
Jan 24 21:06:43.305: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-4573' are running and ready (0 seconds elapsed)
Jan 24 21:06:43.305: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-4573', 0 are Running and Ready.
STEP: Getting kubelet stats 5 times and checking average duration 01/24/23 21:06:43.305
Jan 24 21:07:10.337: INFO: Getting kubelet stats for node capz-conf-jzg2c took an average of 404 milliseconds over 5 iterations
[AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/framework/framework.go:187
Jan 24 21:07:10.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-stats-test-windows-serial-4573" for this suite. 01/24/23 21:07:10.446
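The "Getting kubelet stats for node capz-conf-jzg2c took an average of 404 milliseconds over 5 iterations" line above is a latency measurement against the kubelet's Summary API, repeated 5 times and averaged. Below is a sketch of such a measurement, assuming the stats are fetched through the apiserver's node proxy endpoint (/api/v1/nodes/<node>/proxy/stats/summary); the package and function names are assumptions, not necessarily how test/e2e/windows/kubelet_stats.go implements it.

package kubeletstats

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
)

// averageStatsLatency fetches the kubelet Summary API for the given node
// via the apiserver proxy `iterations` times and returns the mean duration.
func averageStatsLatency(ctx context.Context, cs kubernetes.Interface, node string, iterations int) (time.Duration, error) {
	var total time.Duration
	for i := 0; i < iterations; i++ {
		start := time.Now()
		// GET /api/v1/nodes/<node>/proxy/stats/summary
		_, err := cs.CoreV1().RESTClient().Get().
			Resource("nodes").
			Name(node).
			SubResource("proxy").
			Suffix("stats/summary").
			DoRaw(ctx)
		if err != nil {
			return 0, fmt.Errorf("getting kubelet stats for node %s: %w", node, err)
		}
		total += time.Since(start)
	}
	avg := total / time.Duration(iterations)
	fmt.Printf("Getting kubelet stats for node %s took an average of %v over %d iterations\n", node, avg, iterations)
	return avg, nil
}

The spec passes as long as the averaged duration stays within the 10-second bound in its title, "should return within 10 seconds".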
{"msg":"PASSED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","completed":32,"skipped":2563,"failed":0}
------------------------------
• [70.935 seconds]
[sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
test/e2e/windows/framework.go:27
  Kubelet stats collection for Windows nodes
  test/e2e/windows/kubelet_stats.go:43
    when running 10 pods
    test/e2e/windows/kubelet_stats.go:45
      should return within 10 seconds
      test/e2e/windows/kubelet_stats.go:47
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:05:59.618
Jan 24 21:05:59.618: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-stats-test-windows-serial 01/24/23 21:05:59.62
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:05:59.93
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:06:00.133
[It] should return within 10 seconds
  test/e2e/windows/kubelet_stats.go:47
STEP: Selecting a Windows node 01/24/23 21:06:00.336
Jan 24 21:06:00.441: INFO: Using node: capz-conf-jzg2c
STEP: Scheduling 10 pods 01/24/23 21:06:00.441
Jan 24 21:06:00.557 to 21:06:00.671: INFO: Waiting up to 5m0s for each of the 10 "statscollectiontest-*" pods in namespace "kubelet-stats-test-windows-serial-4573" to be "running and ready"
Jan 24 21:06:00.668 to 21:06:10.878: INFO: All 10 statscollectiontest pods reported Phase="Pending" (readiness=false) on each ~2s poll (elapsed 0.1s through 10.2s), waiting to be Running (with Ready = true)
Jan 24 21:06:10.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.212691107s Jan 24 21:06:10.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:10.884: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212987008s Jan 24 21:06:10.884: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.776: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.21899007s Jan 24 21:06:12.776: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.777: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.217040624s Jan 24 21:06:12.777: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.781: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223713616s Jan 24 21:06:12.782: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.782: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221357453s Jan 24 21:06:12.782: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206238228s Jan 24 21:06:12.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.875: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206357773s Jan 24 21:06:12.875: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.207280825s Jan 24 21:06:12.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.877: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206443525s Jan 24 21:06:12.877: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.212294638s Jan 24 21:06:12.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:12.884: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.212993603s Jan 24 21:06:12.884: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.773: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.212263061s Jan 24 21:06:14.773: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.212674866s Jan 24 21:06:14.774: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.774: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.216661203s Jan 24 21:06:14.774: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.21886927s Jan 24 21:06:14.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.867: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207147332s Jan 24 21:06:14.867: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207323847s Jan 24 21:06:14.877: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.878: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.208967956s Jan 24 21:06:14.878: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.878: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207917083s Jan 24 21:06:14.878: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.888: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.217518634s Jan 24 21:06:14.888: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:14.888: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.217676648s Jan 24 21:06:14.888: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.772: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.214288012s Jan 24 21:06:16.772: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.776: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.215820402s Jan 24 21:06:16.776: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.777: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.21667446s Jan 24 21:06:16.777: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.219882876s Jan 24 21:06:16.778: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.206199499s Jan 24 21:06:16.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.878: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.207563633s Jan 24 21:06:16.878: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.878: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.209210119s Jan 24 21:06:16.878: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.879: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.210129237s Jan 24 21:06:16.879: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.882: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.211196125s Jan 24 21:06:16.882: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:16.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.211955169s Jan 24 21:06:16.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.211808895s Jan 24 21:06:18.772: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.773: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.215644991s Jan 24 21:06:18.773: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.774: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.212988458s Jan 24 21:06:18.774: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.21948686s Jan 24 21:06:18.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.206310376s Jan 24 21:06:18.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.877: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.206679309s Jan 24 21:06:18.877: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.878: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.208799348s Jan 24 21:06:18.878: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.878: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209266561s Jan 24 21:06:18.878: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.882: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.211790767s Jan 24 21:06:18.882: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:18.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.212434593s Jan 24 21:06:18.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.771: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.213294965s Jan 24 21:06:20.771: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.771: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.211106204s Jan 24 21:06:20.771: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.21262312s Jan 24 21:06:20.773: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.218972375s Jan 24 21:06:20.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.206276719s Jan 24 21:06:20.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.875: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.206138352s Jan 24 21:06:20.875: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.206777503s Jan 24 21:06:20.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.20616657s Jan 24 21:06:20.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.884: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.213430018s Jan 24 21:06:20.884: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:20.885: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.213936486s Jan 24 21:06:20.885: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.771: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.213691851s Jan 24 21:06:22.771: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.211531873s Jan 24 21:06:22.772: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.212925454s Jan 24 21:06:22.773: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.218958789s Jan 24 21:06:22.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.206495871s Jan 24 21:06:22.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.207248397s Jan 24 21:06:22.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:22.976: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.306184464s Jan 24 21:06:22.976: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:23.076: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.406988098s Jan 24 21:06:23.076: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:23.176: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.505453097s Jan 24 21:06:23.176: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:23.176: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.505657269s Jan 24 21:06:23.176: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.774: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.213913529s Jan 24 21:06:24.775: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.775: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.214707412s Jan 24 21:06:24.775: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.776: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.218596224s Jan 24 21:06:24.776: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.218934622s Jan 24 21:06:24.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.206202396s Jan 24 21:06:24.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.206946974s Jan 24 21:06:24.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.877: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.207891603s Jan 24 21:06:24.877: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.878: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.207504041s Jan 24 21:06:24.878: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.211906224s Jan 24 21:06:24.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:24.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.212885655s Jan 24 21:06:24.884: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.771: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.213660438s Jan 24 21:06:26.771: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.774: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.213367314s Jan 24 21:06:26.774: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.774: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.213543909s Jan 24 21:06:26.774: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.776: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.218619481s Jan 24 21:06:26.776: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.206391495s Jan 24 21:06:26.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.207035343s Jan 24 21:06:26.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.206307346s Jan 24 21:06:26.877: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.877: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.20809817s Jan 24 21:06:26.877: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.881: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.210814439s Jan 24 21:06:26.881: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:26.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.211586289s Jan 24 21:06:26.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.772: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.215239964s Jan 24 21:06:28.773: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.773: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.212643827s Jan 24 21:06:28.773: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.776: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.215943047s Jan 24 21:06:28.777: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.784: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.226195357s Jan 24 21:06:28.784: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.206302137s Jan 24 21:06:28.866: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207152505s Jan 24 21:06:28.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.877: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207845337s Jan 24 21:06:28.877: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.878: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207535922s Jan 24 21:06:28.878: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.886: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.215755251s Jan 24 21:06:28.887: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:28.886: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.215683974s Jan 24 21:06:28.887: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.771: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.214019457s Jan 24 21:06:30.771: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9": Phase="Running", Reason="", readiness=true. Elapsed: 30.212410997s Jan 24 21:06:30.773: INFO: The phase of Pod statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9 is Running (Ready = true) Jan 24 21:06:30.773: INFO: Pod "statscollectiontest-741bad60-b571-4ef7-bf75-9ad1e4182832-9" satisfied condition "running and ready" Jan 24 21:06:30.774: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.213347225s Jan 24 21:06:30.774: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.776: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.218376288s Jan 24 21:06:30.776: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.870: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.210572439s Jan 24 21:06:30.870: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.207527432s Jan 24 21:06:30.877: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.879: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.208340117s Jan 24 21:06:30.879: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.879: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.2094219s Jan 24 21:06:30.879: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.212588782s Jan 24 21:06:30.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:30.884: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.213452537s Jan 24 21:06:30.884: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4": Phase="Running", Reason="", readiness=true. Elapsed: 32.211808356s Jan 24 21:06:32.772: INFO: The phase of Pod statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4 is Running (Ready = true) Jan 24 21:06:32.772: INFO: Pod "statscollectiontest-f184494e-f8f4-44b1-aa54-953b56d4a6ba-4" satisfied condition "running and ready" Jan 24 21:06:32.773: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2": Phase="Running", Reason="", readiness=true. Elapsed: 32.21567227s Jan 24 21:06:32.773: INFO: The phase of Pod statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2 is Running (Ready = true) Jan 24 21:06:32.773: INFO: Pod "statscollectiontest-b6cf2eee-7ccf-4790-bf58-815daf7a2935-2" satisfied condition "running and ready" Jan 24 21:06:32.777: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.219308675s Jan 24 21:06:32.777: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.865: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.205433579s Jan 24 21:06:32.865: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.888: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.218798239s Jan 24 21:06:32.888: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.889: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.219451444s Jan 24 21:06:32.889: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.890: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.219989451s Jan 24 21:06:32.890: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.891: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.220416248s Jan 24 21:06:32.891: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:32.895: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.2240167s Jan 24 21:06:32.895: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0": Phase="Running", Reason="", readiness=true. Elapsed: 34.220571987s Jan 24 21:06:34.778: INFO: The phase of Pod statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0 is Running (Ready = true) Jan 24 21:06:34.778: INFO: Pod "statscollectiontest-a2cc1713-e54c-4d5c-b5d9-d617f4a0e120-0" satisfied condition "running and ready" Jan 24 21:06:34.867: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.207923338s Jan 24 21:06:34.867: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.205530889s Jan 24 21:06:34.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.206850797s Jan 24 21:06:34.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.877: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.208062815s Jan 24 21:06:34.877: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.212221399s Jan 24 21:06:34.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:34.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Pending", Reason="", readiness=false. Elapsed: 34.212814959s Jan 24 21:06:34.883: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:36.865: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.205703244s Jan 24 21:06:36.865: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:36.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7": Phase="Running", Reason="", readiness=true. Elapsed: 36.206739031s Jan 24 21:06:36.876: INFO: The phase of Pod statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7 is Running (Ready = true) Jan 24 21:06:36.876: INFO: Pod "statscollectiontest-6f5a23fa-8d87-4511-9631-6bc23e472c8c-7" satisfied condition "running and ready" Jan 24 21:06:36.877: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.207690825s Jan 24 21:06:36.877: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:36.878: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.207818352s Jan 24 21:06:36.878: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:36.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.211613723s Jan 24 21:06:36.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:36.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3": Phase="Running", Reason="", readiness=true. Elapsed: 36.212350467s Jan 24 21:06:36.883: INFO: The phase of Pod statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3 is Running (Ready = true) Jan 24 21:06:36.883: INFO: Pod "statscollectiontest-5a8266cd-d9a5-4447-9a6b-06ac3005603d-3" satisfied condition "running and ready" Jan 24 21:06:38.868: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.20833201s Jan 24 21:06:38.868: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:38.880: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.210619609s Jan 24 21:06:38.880: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:38.882: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.211407816s Jan 24 21:06:38.882: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:38.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.211468603s Jan 24 21:06:38.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:40.866: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5": Phase="Running", Reason="", readiness=true. Elapsed: 40.207050947s Jan 24 21:06:40.867: INFO: The phase of Pod statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5 is Running (Ready = true) Jan 24 21:06:40.867: INFO: Pod "statscollectiontest-eb542700-8195-42fa-87d3-da80f86bed95-5" satisfied condition "running and ready" Jan 24 21:06:40.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8": Phase="Running", Reason="", readiness=true. Elapsed: 40.205660069s Jan 24 21:06:40.876: INFO: The phase of Pod statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8 is Running (Ready = true) Jan 24 21:06:40.876: INFO: Pod "statscollectiontest-0838b599-8bb6-4107-af76-1242b85ccda3-8" satisfied condition "running and ready" Jan 24 21:06:40.877: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.207521424s Jan 24 21:06:40.877: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:40.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Pending", Reason="", readiness=false. Elapsed: 40.211352192s Jan 24 21:06:40.882: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Pending, waiting for it to be Running (with Ready = true) Jan 24 21:06:42.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1": Phase="Running", Reason="", readiness=true. Elapsed: 42.206492683s Jan 24 21:06:42.876: INFO: The phase of Pod statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1 is Running (Ready = true) Jan 24 21:06:42.876: INFO: Pod "statscollectiontest-74b9652c-3a99-407d-8372-800ad8320440-1" satisfied condition "running and ready" Jan 24 21:06:42.882: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6": Phase="Running", Reason="", readiness=true. Elapsed: 42.211592659s Jan 24 21:06:42.883: INFO: The phase of Pod statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6 is Running (Ready = true) Jan 24 21:06:42.883: INFO: Pod "statscollectiontest-746eaf88-0f72-47b0-be96-c9cf5be00ab7-6" satisfied condition "running and ready" STEP: Waiting up to 3 minutes for pods to be running 01/24/23 21:06:42.986 Jan 24 21:06:42.987: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-4573' to be running and ready Jan 24 21:06:43.305: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-4573' are running and ready (0 seconds elapsed) Jan 24 21:06:43.305: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-4573', 0 are Running and Ready. STEP: Getting kubelet stats 5 times and checking average duration 01/24/23 21:06:43.305 Jan 24 21:07:10.337: INFO: Getting kubelet stats for node capz-conf-jzg2c took an average of 404 milliseconds over 5 iterations [AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:187 Jan 24 21:07:10.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-stats-test-windows-serial-4573" for this suite. 01/24/23 21:07:10.446
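The long run of "Phase=Pending, waiting for it to be Running (with Ready = true)" records above is a ~2-second poll loop over each stats-collection pod. A minimal client-go sketch of that pattern follows; the function and message formats are illustrative, not the e2e framework's actual helper:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls a pod every 2s (the cadence visible in the
// log records above) until it is Running with the Ready condition true, or
// the timeout expires.
func waitForPodRunningAndReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("The phase of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, pod.Status.Phase)
			return false, nil
		}
		// Running is not enough: the Ready condition must also be true.
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil // satisfied condition "running and ready"
			}
		}
		return false, nil
	})
}
```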
<< End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/24/23 21:07:10.557 Jan 24 21:07:10.557: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:07:10.559 STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:07:10.874 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:07:11.081 [It] Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82 STEP: Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas 01/24/23 21:07:11.284 STEP: creating replication controller rc-light in namespace horizontal-pod-autoscaling-8607 01/24/23 21:07:11.403 I0124 21:07:11.511805 14 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-8607, replica count: 2 I0124 21:07:21.662574 14 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 01/24/23 21:07:21.662 STEP: creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-8607 01/24/23 21:07:21.779 I0124 21:07:21.885879 14 runners.go:193] Created replication controller with name: rc-light-ctrl, namespace: horizontal-pod-autoscaling-8607, replica count: 1 I0124 21:07:32.036755 14 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 21:07:37.037: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Jan 24 21:07:37.140: INFO: RC rc-light: consume 50 millicores in total Jan 24 21:07:37.140: INFO: RC rc-light: setting consumption to 50 millicores in total Jan 24 21:07:37.140: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:07:37.140: INFO: RC rc-light: consume 0 MB in total Jan 24 21:07:37.140: INFO: RC rc-light: consume custom metric 0 in total
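The ConsumeCPU URL records that follow show how the test drives CPU load: it POSTs to the consumer-controller service through the API server's service proxy. A minimal sketch of such a request using plain net/http follows; the function name is illustrative and API server authentication/transport setup is omitted (the e2e framework's resource-consumer helper handles this differently):

```go
package e2esketch

import (
	"fmt"
	"net/http"
	"net/url"
)

// consumeCPU asks the consumer-controller service, reached via the API
// server's service proxy, to burn `millicores` for `durationSec` seconds.
// This mirrors the url.URL printed in the ConsumeCPU log records.
func consumeCPU(client *http.Client, apiHost, ns, svc string, millicores, durationSec int) error {
	u := url.URL{
		Scheme: "https",
		Host:   apiHost, // e.g. the cluster endpoint on :6443
		Path:   fmt.Sprintf("/api/v1/namespaces/%s/services/%s/proxy/ConsumeCPU", ns, svc),
		RawQuery: url.Values{
			"millicores":            []string{fmt.Sprint(millicores)},
			"durationSec":           []string{fmt.Sprint(durationSec)},
			"requestSizeMillicores": []string{"100"},
		}.Encode(),
	}
	resp, err := client.Post(u.String(), "application/x-www-form-urlencoded", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("ConsumeCPU returned %s", resp.Status)
	}
	return nil
}
```

Because each request only sustains the load for durationSec=30 seconds, the test re-sends it roughly every 30 seconds, which is the repeating "sending request to consume 50 millicores" pattern in the records below.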
Jan 24 21:07:37.140: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:07:37.140: INFO: RC rc-light: disabling mem consumption Jan 24 21:07:37.140: INFO: RC rc-light: disabling consumption of custom metric QPS Jan 24 21:07:37.442: INFO: waiting for 1 replicas (current: 2) Jan 24 21:07:57.546: INFO: waiting for 1 replicas (current: 2) Jan 24 21:08:07.342: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:08:07.342: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:08:17.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:08:37.453: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:08:37.453: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:08:37.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:08:57.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:09:07.564: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:09:07.564: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:09:17.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:09:37.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:09:37.675: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:09:37.675: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:09:57.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:10:07.785: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:10:07.785: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:10:17.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:10:37.546: INFO: waiting for 1 replicas (current: 2) Jan 24 21:10:37.903: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:10:37.903: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:10:57.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:11:08.012: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:11:08.012: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:11:17.546: INFO: waiting for 1 replicas (current: 2) Jan 24 21:11:37.545: INFO: waiting for 1 replicas (current: 2)
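The interleaved "waiting for 1 replicas (current: 2)" records are a second poll loop, checking the target's ready-replica count every ~20s until the HPA has scaled it down. A sketch of that check, assuming a ReplicationController target and an illustrative function name:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls the ReplicationController every 20s (the interval
// seen in the "waiting for 1 replicas" records) until the number of ready
// replicas matches the expected count.
func waitForReplicas(ctx context.Context, c kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, rc.Status.ReadyReplicas)
		return rc.Status.ReadyReplicas == want, nil
	})
}
```

The long wait before "current: 1" appears below is expected: the HPA controller applies a scale-down stabilization window, so the downscale lands minutes after CPU usage settles at 50 millicores across 2 pods.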
consume 50 millicores Jan 24 21:11:38.124: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:11:57.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:12:08.235: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:12:08.235: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:12:17.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:12:37.545: INFO: waiting for 1 replicas (current: 2) Jan 24 21:12:38.346: INFO: RC rc-light: sending request to consume 50 millicores Jan 24 21:12:38.347: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8607/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 24 21:12:57.547: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC rc-light 01/24/23 21:12:57.655 Jan 24 21:12:57.655: INFO: RC rc-light: stopping metric consumer Jan 24 21:12:57.655: INFO: RC rc-light: stopping mem consumer Jan 24 21:12:57.655: INFO: RC rc-light: stopping CPU consumer STEP: deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-8607, will wait for the garbage collector to delete the pods 01/24/23 21:13:07.656 Jan 24 21:13:08.019: INFO: Deleting ReplicationController rc-light took: 108.645136ms Jan 24 21:13:08.120: INFO: Terminating ReplicationController rc-light pods took: 101.113807ms STEP: deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-8607, will wait for the garbage collector to delete the pods 01/24/23 21:13:10.256 Jan 24 21:13:10.619: INFO: Deleting ReplicationController rc-light-ctrl took: 109.444898ms Jan 24 21:13:10.720: INFO: Terminating ReplicationController rc-light-ctrl pods took: 100.987386ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 Jan 24 21:13:12.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-8607" for this suite.
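Note on the ConsumeCPU entries above: the e2e resource-consumer pattern drives CPU load by POSTing to the rc-light-ctrl service through the API server's service proxy, which is what the logged URLs show. A minimal client-go sketch of that kind of proxied request follows; the namespace, service name, and query parameters are taken from the log, while the rest (kubeconfig path, standalone program shape) is an illustrative assumption rather than the harness's actual code.

// Sketch: send a ConsumeCPU request through the API server's service proxy,
// mirroring the ConsumeCPU URLs in the log above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path as reported by the test framework in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// POST /api/v1/namespaces/<ns>/services/rc-light-ctrl/proxy/ConsumeCPU
	//      ?durationSec=30&millicores=50&requestSizeMillicores=100
	result := cs.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-8607").
		Resource("services").
		Name("rc-light-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", "50").
		Param("requestSizeMillicores", "100").
		Do(context.TODO())
	fmt.Println("proxy request error:", result.Error())
}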
01/24/23 21:13:12.872 {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","completed":33,"skipped":2600,"failed":0}
------------------------------
• [SLOW TEST] [362.422 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 ReplicationController light test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
------------------------------
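For reference, the scale-down just logged (2 replicas settling to 1 under a steady 50-millicore load) is HPA behavior against a ReplicationController target. Below is a minimal client-go sketch of an autoscaling/v1 HPA equivalent to what this test exercises; the min/max replica counts follow the logged "waiting for 1 replicas (current: 2)" transitions, but the 50% CPU target and the object name are assumptions, since the log does not show the spec the harness creates.

// Sketch (not the harness's actual code): an autoscaling/v1 HPA scaling the
// rc-light ReplicationController between 1 and 2 replicas on CPU utilization.
package main

import (
	"context"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	cs := kubernetes.NewForConfigOrDie(cfg)

	minReplicas := int32(1) // the replica count the log waits for
	targetCPU := int32(50)  // percent; illustrative assumption
	hpa := &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rc-light"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "v1", Kind: "ReplicationController", Name: "rc-light",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    2,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
	_, err := cs.AutoscalingV1().HorizontalPodAutoscalers("horizontal-pod-autoscaling-8607").
		Create(context.TODO(), hpa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}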
------------------------------
[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/24/23 21:13:12.986 Jan 24 21:13:12.986: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets 01/24/23 21:13:12.987 STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:13:13.305 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:13:13.51 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 STEP: Creating simple DaemonSet "daemon-set" 01/24/23 21:13:14.142 STEP: Check that daemon pods launch on every node of the cluster.
01/24/23 21:13:14.25 Jan 24 21:13:14.358: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:14.463: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 21:13:14.463: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 21:13:15.573: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:15.677: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 21:13:15.677: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 21:13:16.571: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:16.675: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 21:13:16.675: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 21:13:17.572: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:17.676: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 21:13:17.676: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 21:13:18.571: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:18.676: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 24 21:13:18.676: INFO: Node capz-conf-jzg2c is running 0 daemon pod, expected 1 Jan 24 21:13:19.572: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:19.675: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 24 21:13:19.675: INFO: Node capz-conf-s4kcn is running 0 daemon pod, expected 1 Jan 24 21:13:20.572: INFO: DaemonSet pods can't tolerate node capz-conf-a7mu8n-control-plane-46cr5 with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 24 21:13:20.677: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 24 21:13:20.677: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: listing all DeamonSets 01/24/23 21:13:20.78 STEP: DeleteCollection of the DaemonSets 01/24/23 21:13:20.885 STEP: Verify that ReplicaSets have been deleted 01/24/23 21:13:20.992 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 Jan 24 21:13:21.303: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"25548"},"items":null} Jan 24 21:13:21.407: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"25548"},"items":[{"metadata":{"name":"daemon-set-6d6b7","generateName":"daemon-set-","namespace":"daemonsets-7619","uid":"1318243d-eccf-46d2-a998-953d383dfd10","resourceVersion":"25546","creationTimestamp":"2023-01-24T21:13:14Z","deletionTimestamp":"2023-01-24T21:13:50Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"eb70249944a58b68baa0f88eec3480a23e2a92b910dc3fa6149234b00c46f4f5","cni.projectcalico.org/podIP":"192.168.11.50/32","cni.projectcalico.org/podIPs":"192.168.11.50/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"64752f98-888d-49fd-b382-225f8567d868","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"64752f98-888d-49fd-b382-225f8567d868\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.11.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-vjljn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-vjljn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","i
magePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-s4kcn","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-s4kcn"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:14Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:19Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:19Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:14Z"}],"hostIP":"10.1.0.5","podIP":"192.168.11.50","podIPs":[{"ip":"192.168.11.50"}],"startTime":"2023-01-24T21:13:14Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-24T21:13:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://97ccc9c09b52bc1571d8aef6425718da47a249749a59868151444aa703fafb67","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-vtpbx","generateName":"daemon-set-","namespace":"daemonsets-7619","uid":"8ea014a3-edd2-447b-96de-e14979be6a7a","resourceVersion":"25547","creationTimestamp":"2023-01-24T21:13:14Z","deletionTimestamp":"2023-01-24T21:13:50Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"0781ef00ef10aceac914c4fdb21473c86c5c282dc3c85851b6a274b4e3ae1396","cni.projectcalico.org/podIP":"192.168.211.4/32","cni.projectcalico.org/podIPs":"192.168.211.4/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"64752f98-888d-49fd-b382-225f8567d868","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"64752f98-888d-49fd-b382-225f8567d868\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":
{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-24T21:13:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.211.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-l2wkp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-l2wkp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-jzg2c","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-jzg2c"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:14Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:19Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:19Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-24T21:13:14Z"}],"host
IP":"10.1.0.4","podIP":"192.168.211.4","podIPs":[{"ip":"192.168.211.4"}],"startTime":"2023-01-24T21:13:14Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-24T21:13:18Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://93a77e2c101950df801cea769df03657db84b75fe803277691526ab163581e99","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Jan 24 21:13:21.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "daemonsets-7619" for this suite. �[38;5;243m01/24/23 21:13:21.831�[0m {"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","completed":34,"skipped":2699,"failed":0} �[38;5;243m------------------------------�[0m �[38;5;10m• [8.955 seconds]�[0m [sig-apps] Daemon set [Serial] �[38;5;243mtest/e2e/apps/framework.go:23�[0m should list and delete a collection of DaemonSets [Conformance] �[38;5;243mtest/e2e/apps/daemon_set.go:822�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 21:13:12.986�[0m Jan 24 21:13:12.986: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename daemonsets �[38;5;243m01/24/23 21:13:12.987�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 21:13:13.305�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 21:13:13.51�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 �[1mSTEP:�[0m Creating simple DaemonSet "daemon-set" �[38;5;243m01/24/23 21:13:14.142�[0m �[1mSTEP:�[0m Check that daemon pods launch on every node of the cluster. 
IP":"10.1.0.4","podIP":"192.168.211.4","podIPs":[{"ip":"192.168.211.4"}],"startTime":"2023-01-24T21:13:14Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-24T21:13:18Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://93a77e2c101950df801cea769df03657db84b75fe803277691526ab163581e99","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Jan 24 21:13:21.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "daemonsets-7619" for this suite. �[38;5;243m01/24/23 21:13:21.831�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-apps] CronJob�[0m �[1mshould not schedule jobs when suspended [Slow] [Conformance]�[0m �[38;5;243mtest/e2e/apps/cronjob.go:96�[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/24/23 21:13:21.946�[0m Jan 24 21:13:21.946: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename cronjob �[38;5;243m01/24/23 21:13:21.947�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/24/23 21:13:22.263�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/24/23 21:13:22.468�[0m [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96 �[1mSTEP:�[0m Creating a suspended cronjob �[38;5;243m01/24/23 21:13:22.673�[0m �[1mSTEP:�[0m Ensuring no jobs are scheduled �[38;5;243m01/24/23 21:13:22.782�[0m �[1mSTEP:�[0m Ensuring no job exists by listing jobs explicitly �[38;5;243m01/24/23 21:18:22.987�[0m �[1mSTEP:�[0m Removing cronjob �[38;5;243m01/24/23 21:18:23.09�[0m [AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 Jan 24 21:18:23.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "cronjob-5881" for this suite. 
01/24/23 21:18:23.305 {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","completed":35,"skipped":2734,"failed":0}
------------------------------
• [SLOW TEST] [301.467 seconds] [sig-apps] CronJob test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] test/e2e/apps/cronjob.go:96
------------------------------
[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 01/24/23 21:18:23.417 Jan 24 21:18:23.417: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion 01/24/23 21:18:23.418 STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:18:23.63 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:18:23.833 [It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296 STEP: creating the pod 01/24/23 21:18:24.037 STEP: waiting for pod running 01/24/23 21:18:24.15 Jan 24 21:18:24.150: INFO: Waiting up to 2m0s for pod
"var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" in namespace "var-expansion-3424" to be "running" Jan 24 21:18:24.252: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.338957ms Jan 24 21:18:26.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206573058s Jan 24 21:18:28.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205951738s Jan 24 21:18:30.357: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207360482s Jan 24 21:18:32.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20660787s Jan 24 21:18:34.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206155124s Jan 24 21:18:36.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.205751022s Jan 24 21:18:38.358: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.208216949s Jan 24 21:18:40.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Running", Reason="", readiness=true. Elapsed: 16.206084217s Jan 24 21:18:40.356: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" satisfied condition "running" �[1mSTEP:�[0m creating a file in subpath �[38;5;243m01/24/23 21:18:40.356�[0m Jan 24 21:18:40.458: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3424 PodName:var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 24 21:18:40.459: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 24 21:18:40.460: INFO: ExecWithOptions: Clientset creation Jan 24 21:18:40.460: INFO: ExecWithOptions: execute(POST https://capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-3424/pods/var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) �[1mSTEP:�[0m test for file in mounted path �[38;5;243m01/24/23 21:18:41.183�[0m Jan 24 21:18:41.286: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3424 PodName:var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 24 21:18:41.286: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 24 21:18:41.287: INFO: ExecWithOptions: Clientset creation Jan 24 21:18:41.287: INFO: ExecWithOptions: execute(POST https://capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-3424/pods/var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) �[1mSTEP:�[0m updating the annotation value �[38;5;243m01/24/23 21:18:42.003�[0m Jan 24 21:18:42.722: INFO: Successfully updated pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" 
STEP: waiting for annotated pod running 01/24/23 21:18:42.722 Jan 24 21:18:42.722: INFO: Waiting up to 2m0s for pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" in namespace "var-expansion-3424" to be "running" Jan 24 21:18:42.825: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d": Phase="Running", Reason="", readiness=true. Elapsed: 102.685494ms Jan 24 21:18:42.825: INFO: Pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" satisfied condition "running" STEP: deleting the pod gracefully 01/24/23 21:18:42.825 Jan 24 21:18:42.825: INFO: Deleting pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" in namespace "var-expansion-3424" Jan 24 21:18:42.941: INFO: Wait up to 5m0s for pod "var-expansion-87820dc4-bdce-4a42-8f15-84eea296063d" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 Jan 24 21:18:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3424" for this suite. 01/24/23 21:18:47.254 {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","completed":36,"skipped":2779,"failed":0}
------------------------------
• [23.947 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:296
------------------------------
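The pod in this test mounts the same volume twice, once at /volume_mount and once at /subpath_mount through an expanded subPathExpr, which is why touching /volume_mount/mypath/foo/test.log is expected to appear as /subpath_mount/test.log in the exec checks above. A minimal sketch of that wiring follows; the annotation key, env var name, and image are assumptions (the real spec lives in test/e2e/common/node/expansion.go), only the mount paths, container name, and "mypath/foo" subpath come from the log.

// Sketch: one volume mounted twice; the second mount's SubPathExpr expands an
// env var fed from a pod annotation via the downward API, so a file written
// under /volume_mount/mypath/foo/ surfaces at /subpath_mount/.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-subpath",
			Annotations: map[string]string{"mysubpath": "mypath/foo"}, // key is an assumption
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumption
				Command: []string{"sh", "-c", "sleep 600"},
				Env: []corev1.EnvVar{{
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "workdir", MountPath: "/volume_mount"},
					// kubelet expands $(POD_SUBPATH) before mounting:
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(POD_SUBPATH)"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println("pod spec built:", pod.Name)
}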
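The 2m0s 'to be "running"' waits above poll the pod roughly every two seconds until its phase is Running. A minimal client-go sketch of that pattern (illustrative only, not the e2e framework's own helper; the kubeconfig path matches the log, the pod name is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls a pod until it reaches phase Running, mirroring
// the 'Waiting up to 2m0s ... to be "running"' entries in the log.
func waitForPodRunning(cs kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// "example-pod" is a placeholder; the test generates a random pod name.
	if err := waitForPodRunning(cs, "var-expansion-3424", "example-pod"); err != nil {
		panic(err)
	}
}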
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
test/e2e/apimachinery/garbage_collector.go:650
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:18:47.368
Jan 24 21:18:47.369: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/24/23 21:18:47.37
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:18:47.683
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:18:47.886
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650
STEP: create the rc 01/24/23 21:18:48.196
STEP: delete the rc 01/24/23 21:18:53.417
STEP: wait for the rc to be deleted 01/24/23 21:18:53.522
Jan 24 21:18:54.759: INFO: 80 pods remaining Jan 24 21:18:54.759: INFO: 80 pods has nil DeletionTimestamp Jan 24 21:18:54.759: INFO: Jan 24 21:18:55.763: INFO: 63 pods remaining Jan 24 21:18:55.763: INFO: 63 pods has nil DeletionTimestamp Jan 24 21:18:55.763: INFO: Jan 24 21:18:56.751: INFO: 60 pods remaining
Jan 24 21:18:56.751: INFO: 60 pods has nil DeletionTimestamp Jan 24 21:18:56.751: INFO: Jan 24 21:18:57.743: INFO: 40 pods remaining Jan 24 21:18:57.743: INFO: 40 pods has nil DeletionTimestamp Jan 24 21:18:57.743: INFO: Jan 24 21:18:58.755: INFO: 23 pods remaining Jan 24 21:18:58.755: INFO: 23 pods has nil DeletionTimestamp Jan 24 21:18:58.755: INFO: Jan 24 21:18:59.737: INFO: 20 pods remaining Jan 24 21:18:59.737: INFO: 20 pods has nil DeletionTimestamp Jan 24 21:18:59.737: INFO:
STEP: Gathering metrics 01/24/23 21:19:00.73
Jan 24 21:19:01.049: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5" in namespace "kube-system" to be "running and ready" Jan 24 21:19:01.152: INFO: Pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5": Phase="Running", Reason="", readiness=true. Elapsed: 103.0502ms Jan 24 21:19:01.152: INFO: The phase of Pod kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5 is Running (Ready = true) Jan 24 21:19:01.152: INFO: Pod "kube-controller-manager-capz-conf-a7mu8n-control-plane-46cr5" satisfied condition "running and ready"
Jan 24 21:19:02.015: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
Jan 24 21:19:02.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-618" for this suite. 01/24/23 21:19:02.122
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","completed":37,"skipped":2861,"failed":0}
------------------------------
• [14.861 seconds] [sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/apimachinery/garbage_collector.go:650
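The behaviour asserted here, an RC that lingers until its pods are gone, is what foreground cascading deletion guarantees. A minimal client-go sketch (assumed names, not the conformance test's own code):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground issues the delete with foreground propagation: the
// ReplicationController keeps existing (with a deletionTimestamp and the
// foregroundDeletion finalizer) until the garbage collector has removed
// every pod it owns, which is what the test observes while it counts
// "pods remaining".
func deleteRCForeground(cs kubernetes.Interface, namespace, name string) error {
	fg := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(namespace).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &fg})
}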
------------------------------
[sig-node] Variable Expansion
should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
test/e2e/common/node/expansion.go:224
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:19:02.236
Jan 24 21:19:02.236: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 01/24/23 21:19:02.238
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:19:02.55
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:19:02.753
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:224
STEP: creating the pod with failed condition 01/24/23 21:19:02.962
Jan 24 21:19:03.071: INFO: Waiting up to 2m0s for pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d" in namespace "var-expansion-2187" to be "running" Jan 24 21:19:03.174: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.503865ms Jan 24 21:19:05.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206195993s Jan 24 21:19:07.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205999406s Jan 24 21:19:09.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206257204s Jan 24 21:19:11.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20627476s Jan 24 21:19:13.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207543054s Jan 24 21:19:15.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.205881129s Jan 24 21:19:17.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.205741876s Jan 24 21:19:19.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.205495065s Jan 24 21:19:21.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.207709018s Jan 24 21:19:23.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.205643931s Jan 24 21:19:25.286: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.214628804s Jan 24 21:19:27.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.205731361s Jan 24 21:19:29.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.204853499s Jan 24 21:19:31.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.206515115s Jan 24 21:19:33.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.204883049s Jan 24 21:19:35.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.205476215s Jan 24 21:19:37.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.20540953s Jan 24 21:19:39.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false.
Elapsed: 36.204908698s Jan 24 21:19:41.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.206315904s Jan 24 21:19:43.282: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.21086942s Jan 24 21:19:45.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.205566651s Jan 24 21:19:47.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.205981542s Jan 24 21:19:49.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.206217419s Jan 24 21:19:51.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.205781309s Jan 24 21:19:53.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.206039837s Jan 24 21:19:55.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.207353945s Jan 24 21:19:57.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.206932337s Jan 24 21:19:59.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.205142918s Jan 24 21:20:01.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 58.206093621s Jan 24 21:20:03.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.206450691s Jan 24 21:20:05.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.206823981s Jan 24 21:20:07.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.205904787s Jan 24 21:20:09.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.206301301s Jan 24 21:20:11.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.206719622s Jan 24 21:20:13.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.20573403s Jan 24 21:20:15.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.205112549s Jan 24 21:20:17.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.205906563s Jan 24 21:20:19.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.205667165s Jan 24 21:20:21.281: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.209439821s Jan 24 21:20:23.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.204669148s Jan 24 21:20:25.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m22.206802038s Jan 24 21:20:27.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.207162961s Jan 24 21:20:29.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.205736904s Jan 24 21:20:31.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.206001026s Jan 24 21:20:33.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.204758881s Jan 24 21:20:35.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.205881259s Jan 24 21:20:37.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.205962755s Jan 24 21:20:39.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.205529518s Jan 24 21:20:41.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.205829478s Jan 24 21:20:43.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.206127805s Jan 24 21:20:45.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.205164083s Jan 24 21:20:47.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.207950159s Jan 24 21:20:49.279: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.207480279s Jan 24 21:20:51.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.206184109s Jan 24 21:20:53.276: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.204598568s Jan 24 21:20:55.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.206072877s Jan 24 21:20:57.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.206826488s Jan 24 21:20:59.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.205516611s Jan 24 21:21:01.277: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.205342129s Jan 24 21:21:03.278: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.206770948s Jan 24 21:21:03.381: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m0.309336066s
STEP: updating the pod 01/24/23 21:21:03.381
Jan 24 21:21:04.105: INFO: Successfully updated pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d"
STEP: waiting for pod running 01/24/23 21:21:04.106
Jan 24 21:21:04.106: INFO: Waiting up to 2m0s for pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d" in namespace "var-expansion-2187" to be "running" Jan 24 21:21:04.209: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.642216ms Jan 24 21:21:06.313: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206463582s Jan 24 21:21:08.313: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206973662s Jan 24 21:21:10.312: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206276368s Jan 24 21:21:12.312: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206093386s Jan 24 21:21:14.313: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206754616s Jan 24 21:21:16.311: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.205190339s Jan 24 21:21:18.313: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.206478053s Jan 24 21:21:20.314: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d": Phase="Running", Reason="", readiness=true. Elapsed: 16.207919461s Jan 24 21:21:20.314: INFO: Pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d" satisfied condition "running"
STEP: deleting the pod gracefully 01/24/23 21:21:20.314
Jan 24 21:21:20.315: INFO: Deleting pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d" in namespace "var-expansion-2187" Jan 24 21:21:20.429: INFO: Wait up to 5m0s for pod "var-expansion-57d739a8-3cf5-459a-a914-e446b82bbd2d" to be fully deleted
[AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187
Jan 24 21:21:24.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2187" for this suite. 01/24/23 21:21:24.742
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","completed":38,"skipped":2947,"failed":0}
------------------------------
• [SLOW TEST] [142.613 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:224
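The two-minute Pending wall above is the point of the test: the container's volumeMount uses a subPathExpr that expands an env var sourced from a pod annotation, and until the annotation resolves to a usable path the kubelet cannot start the container. Once the annotation is patched (the "updating the pod" step), the expansion succeeds and the pod runs. A minimal sketch of that pod-spec wiring (the image and all names are assumptions, not the test's own values):

package example

import corev1 "k8s.io/api/core/v1"

// failingSubpathPodSpec sketches the mechanics this test relies on: an env
// var fed from a pod annotation through the downward API, expanded inside
// the volumeMount's subPathExpr. An unexpandable annotation value keeps the
// container from starting, so the pod stays Pending until the annotation
// is updated.
func failingSubpathPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "dapi-container",
			Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
			Env: []corev1.EnvVar{{
				Name: "ANNOTATION",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.annotations['mysubpath']",
					},
				},
			}},
			VolumeMounts: []corev1.VolumeMount{{
				Name:        "workdir1",
				MountPath:   "/subpath_mount",
				SubPathExpr: "$(ANNOTATION)", // the expansion under test
			}},
		}},
		Volumes: []corev1.Volume{{
			Name:         "workdir1",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
	}
}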
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet
Should scale from 1 pod to 3 pods and from 3 to 5
test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:21:24.856
Jan 24 21:21:24.856: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:21:24.857
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:21:25.168
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:21:25.371
[It] Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 01/24/23 21:21:25.575
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-9953 01/24/23 21:21:25.69
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-9953 01/24/23 21:21:25.69
I0124 21:21:25.798856 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-9953, replica count: 1
I0124 21:21:35.950318 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 21:21:35.95
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-9953 01/24/23 21:21:36.073
I0124 21:21:36.180766 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-9953, replica count: 1
I0124 21:21:46.332291 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:21:51.332: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 24 21:21:51.435: INFO: RC rs: consume 250 millicores in total
Jan 24 21:21:51.435: INFO: RC rs: setting consumption to 250 millicores in total Jan 24 21:21:51.435: INFO: RC rs: consume 0 MB in total Jan 24 21:21:51.435: INFO: RC rs: disabling mem consumption Jan 24 21:21:51.435: INFO: RC rs: sending request to consume 250 millicores Jan 24 21:21:51.435: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9953/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 24 21:21:51.435: INFO: RC rs: consume custom metric 0 in total Jan 24 21:21:51.435: INFO: RC rs: disabling consumption of custom metric QPS Jan 24 21:21:51.644: INFO: waiting for 3 replicas (current: 1) Jan 24 21:22:11.747: INFO: waiting for 3 replicas (current: 1) Jan 24 21:22:21.638: INFO: RC rs: sending request to consume 250 millicores Jan 24 21:22:21.638: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9953/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 24 21:22:31.747: INFO: waiting for 3 replicas (current: 3) Jan 24 21:22:31.747: INFO: RC rs: consume 700 millicores in total Jan 24 21:22:31.748: INFO: RC rs: setting consumption to 700 millicores in total Jan 24 21:22:31.850: INFO: waiting for 5 replicas (current: 3) Jan 24 21:22:51.753: INFO: RC rs: sending request to consume 700 millicores Jan 24 21:22:51.753: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9953/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 24 21:22:51.953: INFO: waiting for 5 replicas (current: 3) Jan 24 21:23:11.954: INFO: waiting for 5 replicas (current: 4) Jan 24 21:23:24.886: INFO: RC rs: sending request to consume 700 millicores Jan 24 21:23:24.886: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9953/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 24 21:23:31.953: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC rs 01/24/23 21:23:32.061
Jan 24 21:23:32.061: INFO: RC rs: stopping metric consumer Jan 24 21:23:32.061: INFO: RC rs: stopping CPU consumer Jan 24 21:23:32.061: INFO: RC rs: stopping mem consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-9953, will wait for the garbage collector to delete the pods 01/24/23 21:23:42.061
Jan 24 21:23:42.623: INFO: Deleting ReplicaSet.apps rs took: 106.859834ms Jan 24 21:23:42.724: INFO: Terminating ReplicaSet.apps rs pods took: 100.96819ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-9953, will wait for the garbage collector to delete the pods 01/24/23 21:23:45.867
Jan 24 21:23:46.230: INFO: Deleting ReplicationController rs-ctrl took: 108.636681ms Jan 24 21:23:46.331: INFO: Terminating ReplicationController rs-ctrl pods took: 100.96022ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
Jan 24 21:23:48.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-9953" for this suite. 01/24/23 21:23:48.998
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","completed":39,"skipped":3019,"failed":0}
------------------------------
• [SLOW TEST] [144.248 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
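The ConsumeCPU URL entries above show how the load is driven: the test calls the resource-consumer controller service through the API server's service proxy, repeatedly asking it to burn a CPU target for 30-second windows so the HPA sees sustained utilization. A sketch of that request pattern with client-go (an assumed helper, not the e2e framework's own; the real framework also encodes a service port in the proxied name):

package example

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// consumeCPU asks the resource-consumer service, via the API server's
// service proxy, to consume the given millicores for a 30s window.
// Parameter names mirror the query string in the log above.
func consumeCPU(cs kubernetes.Interface, namespace, service, millicores string) error {
	return cs.CoreV1().RESTClient().Post().
		Namespace(namespace).
		Resource("services").
		Name(service).
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("millicores", millicores).
		Param("durationSec", "30").
		Param("requestSizeMillicores", "100").
		Do(context.TODO()).
		Error()
}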
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:23:49.11
Jan 24 21:23:49.111: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:23:49.112
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:23:49.424
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:23:49.627
[It] Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 01/24/23 21:23:49.829
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-5948 01/24/23 21:23:49.947
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-5948 01/24/23 21:23:49.948
I0124 21:23:50.063826 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-5948, replica count: 1
I0124 21:24:00.217109 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0124 21:24:00.217109      14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 21:24:00.217
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-5948 01/24/23 21:24:00.337
I0124 21:24:00.444434      14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-5948, replica count: 1
I0124 21:24:10.595915      14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:24:15.598: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1
Jan 24 21:24:15.701: INFO: RC rs: consume 125 millicores in total
Jan 24 21:24:15.701: INFO: RC rs: setting consumption to 125 millicores in total
Jan 24 21:24:15.701: INFO: RC rs: consume 0 MB in total
Jan 24 21:24:15.701: INFO: RC rs: disabling mem consumption
Jan 24 21:24:15.701: INFO: RC rs: consume custom metric 0 in total
Jan 24 21:24:15.701: INFO: RC rs: disabling consumption of custom metric QPS
Jan 24 21:24:15.909: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:24:36.013: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:24:45.705: INFO: RC rs: sending request to consume 125 millicores
Jan 24 21:24:45.705: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5948/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 }
Jan 24 21:24:56.012: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:25:15.854: INFO: RC rs: sending request to consume 125 millicores
Jan 24 21:25:15.854: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5948/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=125&requestSizeMillicores=100 }
Jan 24 21:25:16.012: INFO: waiting for 3 replicas (current: 3)
Jan 24 21:25:16.012: INFO: RC rs: consume 500 millicores in total
Jan 24 21:25:16.013: INFO: RC rs: setting consumption to 500 millicores in total
Jan 24 21:25:16.115: INFO: waiting for 5 replicas (current: 3)
Jan 24 21:25:36.219: INFO: waiting for 5 replicas (current: 3)
Jan 24 21:25:45.974: INFO: RC rs: sending request to consume 500 millicores
Jan 24 21:25:45.975: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5948/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 }
Jan 24 21:25:56.219: INFO: waiting for 5 replicas (current: 3)
Jan 24 21:26:16.092: INFO: RC rs: sending request to consume 500 millicores
Jan 24 21:26:16.093: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5948/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 }
Jan 24 21:26:16.218: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC rs 01/24/23 21:26:16.326
Jan 24 21:26:16.326: INFO: RC rs: stopping metric consumer
Jan 24 21:26:16.326: INFO: RC rs: stopping mem consumer
Jan 24 21:26:16.510: INFO: RC rs: stopping CPU consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-5948, will wait for the garbage collector to delete the pods 01/24/23 21:26:26.51
Jan 24 21:26:27.174: INFO: Deleting ReplicaSet.apps rs took: 108.417431ms
Jan 24 21:26:27.274: INFO: Terminating ReplicaSet.apps rs pods took: 100.271607ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-5948, will wait for the garbage collector to delete the pods 01/24/23 21:26:30.9
Jan 24 21:26:31.266: INFO: Deleting ReplicationController rs-ctrl took: 113.568201ms
Jan 24 21:26:31.367: INFO: Terminating ReplicationController rs-ctrl pods took: 100.363467ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:187
Jan 24 21:26:33.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-5948" for this suite. 01/24/23 21:26:33.894
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","completed":40,"skipped":3131,"failed":0}
------------------------------
• [SLOW TEST] [164.892 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
    Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
------------------------------
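For readers reproducing the ContainerResource use case above outside the e2e framework, here is a minimal sketch of an autoscaling/v2 spec that scales on the CPU of one named container, so an idle sidecar does not dilute the utilization signal. The object name, container name, and targets are illustrative assumptions, not values read from the test source.

```go
// Sketch: HPA scaling on a single container's CPU (ContainerResource metric).
// Names and numeric targets are illustrative, not taken from the test run.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func containerResourceHPA() *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	target := int32(20) // percent CPU utilization of the busy container
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-hpa"}, // hypothetical name
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "ReplicaSet", Name: "rs",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ContainerResourceMetricSourceType,
				ContainerResource: &autoscalingv2.ContainerResourceMetricSource{
					Name:      corev1.ResourceCPU,
					Container: "rs", // only this container's CPU counts; the sidecar is ignored
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricTargetType,
						AverageUtilization: &target,
					},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", containerResourceHPA()) }
```

With a plain Resource metric, the idle sidecar's near-zero usage would be averaged into the pod's utilization; the ContainerResource source avoids that, which is exactly the behavior the passing test above checks.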
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate
should scale down no more than given number of Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:26:34.016
Jan 24 21:26:34.016: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:26:34.017
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:26:34.332
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:26:34.535
[It] should scale down no more than given number of Pods per minute
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253
STEP: setting up resource consumer and HPA 01/24/23 21:26:34.738
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 6 replicas 01/24/23 21:26:34.738
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-2435 01/24/23 21:26:34.857
I0124 21:26:34.963887      14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-2435, replica count: 6
I0124 21:26:45.114479      14 runners.go:193] consumer Pods: 6 out of 6 created, 6 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 21:26:45.114
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-2435 01/24/23 21:26:45.241
I0124 21:26:45.350202      14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-2435, replica count: 1
I0124 21:26:55.505830      14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:27:00.506: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 24 21:27:00.609: INFO: RC consumer: consume 660 millicores in total
Jan 24 21:27:00.609: INFO: RC consumer: setting consumption to 660 millicores in total
Jan 24 21:27:00.609: INFO: RC consumer: sending request to consume 660 millicores
Jan 24 21:27:00.609: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2435/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=660&requestSizeMillicores=100 }
Jan 24 21:27:00.609: INFO: RC consumer: consume 0 MB in total
Jan 24 21:27:00.609: INFO: RC consumer: disabling mem consumption
Jan 24 21:27:00.609: INFO: RC consumer: consume custom metric 0 in total
Jan 24 21:27:00.609: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: triggering scale down by lowering consumption 01/24/23 21:27:00.718
Jan 24 21:27:00.718: INFO: RC consumer: consume 110 millicores in total
Jan 24 21:27:00.813: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 24 21:27:00.916: INFO: waiting for 4 replicas (current: 6)
Jan 24 21:27:21.019: INFO: waiting for 4 replicas (current: 5)
Jan 24 21:27:30.813: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 21:27:30.813: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2435/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 21:27:41.019: INFO: waiting for 4 replicas (current: 5)
Jan 24 21:28:00.926: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 21:28:00.926: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2435/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 21:28:01.019: INFO: waiting for 4 replicas (current: 4)
Jan 24 21:28:01.122: INFO: waiting for 2 replicas (current: 4)
Jan 24 21:28:21.225: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:28:31.040: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 21:28:31.040: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2435/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 21:28:41.227: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:29:01.154: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 21:29:01.154: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2435/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 21:29:01.230: INFO: waiting for 2 replicas (current: 2)
STEP: verifying time waited for a scale down to 4 replicas 01/24/23 21:29:01.23
STEP: verifying time waited for a scale down to 2 replicas 01/24/23 21:29:01.231
STEP: Removing consuming RC consumer 01/24/23 21:29:01.338
Jan 24 21:29:01.339: INFO: RC consumer: stopping metric consumer
Jan 24 21:29:01.339: INFO: RC consumer: stopping CPU consumer
Jan 24 21:29:01.339: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-2435, will wait for the garbage collector to delete the pods 01/24/23 21:29:11.339
Jan 24 21:29:11.706: INFO: Deleting Deployment.apps consumer took: 108.19887ms
Jan 24 21:29:11.806: INFO: Terminating Deployment.apps consumer pods took: 100.228445ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-2435, will wait for the garbage collector to delete the pods 01/24/23 21:29:14.444
Jan 24 21:29:14.805: INFO: Deleting ReplicationController consumer-ctrl took: 107.442116ms
Jan 24 21:29:14.905: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.475725ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:187
Jan 24 21:29:16.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-2435" for this suite. 01/24/23 21:29:16.848
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute","completed":41,"skipped":3284,"failed":0}
------------------------------
• [SLOW TEST] [162.940 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
  with scale limited by number of Pods rate
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:210
    should scale down no more than given number of Pods per minute
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:253
------------------------------
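The "scale limited by number of Pods rate" behavior above (6 → 4 → 2 replicas, one step at a time) comes from an HPA scale-down policy of type Pods. A minimal sketch of that behavior stanza follows; the concrete values (1 pod per 60s, zero stabilization window) are illustrative assumptions, not read from the test source.

```go
// Sketch: rate-limiting scale down to at most N pods per period
// via autoscaling/v2 behavior. Values are illustrative.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

func podsRateScaleDown() *autoscalingv2.HorizontalPodAutoscalerBehavior {
	window := int32(0) // react immediately; the policy alone limits the rate
	return &autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2.HPAScalingRules{
			StabilizationWindowSeconds: &window,
			Policies: []autoscalingv2.HPAScalingPolicy{{
				Type:          autoscalingv2.PodsScalingPolicy, // remove at most...
				Value:         1,                               // ...one pod...
				PeriodSeconds: 60,                              // ...per minute
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", podsRateScaleDown()) }
```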
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window
should scale down soon after the stabilization period
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:55
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:29:16.957
Jan 24 21:29:16.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 01/24/23 21:29:16.959
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:29:17.27
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:29:17.473
[It] should scale down soon after the stabilization period
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:55
STEP: setting up resource consumer and HPA 01/24/23 21:29:17.676
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 01/24/23 21:29:17.676
STEP: creating deployment consumer in namespace horizontal-pod-autoscaling-9509 01/24/23 21:29:17.792
I0124 21:29:17.901677      14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-9509, replica count: 1
I0124 21:29:28.054567      14 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 01/24/23 21:29:28.054
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-9509 01/24/23 21:29:28.17
I0124 21:29:28.277248      14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-9509, replica count: 1
I0124 21:29:38.429560      14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 21:29:43.431: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Jan 24 21:29:43.534: INFO: RC consumer: consume 110 millicores in total
Jan 24 21:29:43.534: INFO: RC consumer: setting consumption to 110 millicores in total
Jan 24 21:29:43.534: INFO: RC consumer: sending request to consume 110 millicores
Jan 24 21:29:43.534: INFO: RC consumer: consume 0 MB in total
Jan 24 21:29:43.534: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Jan 24 21:29:43.534: INFO: RC consumer: consume custom metric 0 in total
Jan 24 21:29:43.534: INFO: RC consumer: disabling consumption of custom metric QPS
Jan 24 21:29:43.534: INFO: RC consumer: disabling mem consumption
STEP: triggering scale up to record a recommendation 01/24/23 21:29:43.642
Jan 24 21:29:43.642: INFO: RC consumer: consume 330 millicores in total
Jan 24 21:29:43.735: INFO: RC consumer: setting consumption to 330 millicores in total
Jan 24 21:29:43.838: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:30:03.945: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:30:13.736: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 21:30:13.736: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 21:30:23.944: INFO: waiting for 3 replicas (current: 1)
Jan 24 21:30:43.869: INFO: RC consumer: sending request to consume 330 millicores
Jan 24 21:30:43.869: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 }
Jan 24 21:30:43.941: INFO: waiting for 3 replicas (current: 3)
STEP: triggering scale down by lowering consumption 01/24/23 21:30:43.941
Jan 24 21:30:43.941: INFO: RC consumer: consume 220 millicores in total
Jan 24 21:30:46.988: INFO: RC consumer: setting consumption to 220 millicores in total
Jan 24 21:30:47.091: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:31:07.194: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:31:16.989: INFO: RC consumer: sending request to consume 220 millicores
Jan 24 21:31:16.989: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 }
Jan 24 21:31:27.194: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:31:47.103: INFO: RC consumer: sending request to consume 220 millicores
Jan 24 21:31:47.103: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 }
Jan 24 21:31:47.196: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:32:07.194: INFO: waiting for 2 replicas (current: 3)
Jan 24 21:32:17.214: INFO: RC consumer: sending request to consume 220 millicores
Jan 24 21:32:17.215: INFO: ConsumeCPU URL: {https capz-conf-a7mu8n-d511c417.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9509/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 }
Jan 24 21:32:27.194: INFO: waiting for 2 replicas (current: 2)
STEP: verifying time waited for a scale down 01/24/23 21:32:27.194
Jan 24 21:32:27.195: INFO: time waited for scale down: 1m40.206040164s
STEP: Removing consuming RC consumer 01/24/23 21:32:27.302
Jan 24 21:32:27.302: INFO: RC consumer: stopping metric consumer
Jan 24 21:32:27.302: INFO: RC consumer: stopping mem consumer
Jan 24 21:32:27.302: INFO: RC consumer: stopping CPU consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-9509, will wait for the garbage collector to delete the pods 01/24/23 21:32:37.303
Jan 24 21:32:37.662: INFO: Deleting Deployment.apps consumer took: 106.546664ms
Jan 24 21:32:37.763: INFO: Terminating Deployment.apps consumer pods took: 100.969715ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-9509, will wait for the garbage collector to delete the pods 01/24/23 21:32:40.894
Jan 24 21:32:41.253: INFO: Deleting ReplicationController consumer-ctrl took: 105.330713ms
Jan 24 21:32:41.353: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.2346ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
  test/e2e/framework/framework.go:187
Jan 24 21:32:43.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-9509" for this suite. 01/24/23 21:32:43.197
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period","completed":42,"skipped":3292,"failed":0}
------------------------------
• [SLOW TEST] [206.347 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
  with short downscale stabilization window
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:54
    should scale down soon after the stabilization period
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:55
------------------------------
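The "short downscale stabilization window" in the test above (scale down observed after about 1m40s rather than the default five minutes) is configured through the same behavior stanza. A minimal sketch, assuming an illustrative 60-second window rather than the value the test actually sets:

```go
// Sketch: shortening the HPA downscale stabilization window.
// The default is 300s; the 60s here is an illustrative assumption.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

func shortDownscaleWindow() *autoscalingv2.HorizontalPodAutoscalerBehavior {
	window := int32(60) // HPA holds the highest recommendation for this long before scaling down
	return &autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2.HPAScalingRules{
			StabilizationWindowSeconds: &window,
		},
	}
}

func main() { fmt.Printf("%+v\n", shortDownscaleWindow()) }
```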
------------------------------
[sig-api-machinery] Garbage collector
should support orphan deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:1040
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:32:43.316
Jan 24 21:32:43.316: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 01/24/23 21:32:43.317
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:32:43.636
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:32:43.839
[It] should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:1040
Jan 24 21:32:44.042: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 21:32:46.701: INFO: created owner resource "ownerzwd99"
Jan 24 21:32:46.806: INFO: created dependent resource "dependentqdj2b"
STEP: wait for the owner to be deleted 01/24/23 21:32:46.912
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 01/24/23 21:32:57.018
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jan 24 21:33:27.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8661" for this suite. 01/24/23 21:33:27.542
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","completed":43,"skipped":3484,"failed":0}
------------------------------
• [44.336 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:1040
------------------------------
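Orphan deletion, as verified above, means the owner object is removed while the garbage collector deliberately leaves the dependent in place. A minimal sketch of issuing such a delete with the dynamic client; the group/version/resource and namespace are hypothetical placeholders (the e2e test builds its own throwaway CRD), and only the owner name is taken from the log.

```go
// Sketch: deleting a custom resource with PropagationPolicy=Orphan,
// so dependents keep their ownerReferences but are not collected.
// The GVR and namespace are hypothetical; the e2e test uses its own CRD.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// Hypothetical custom resource, standing in for the test's throwaway CRD.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "owners"}

	orphan := metav1.DeletePropagationOrphan
	if err := client.Resource(gvr).Namespace("default").Delete(
		context.TODO(), "ownerzwd99",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	); err != nil {
		panic(err)
	}
	// The dependent object should survive; the test waits 30s to confirm
	// the garbage collector does not mistakenly delete it.
}
```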
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
test/e2e/apimachinery/namespace.go:242
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:33:27.661
Jan 24 21:33:27.662: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces 01/24/23 21:33:27.663
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:33:27.976
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:33:28.178
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:242
STEP: Creating a test namespace 01/24/23 21:33:28.38
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:33:28.693
STEP: Creating a pod in the namespace 01/24/23 21:33:28.897
STEP: Waiting for the pod to have running status 01/24/23 21:33:29.007
Jan 24 21:33:29.007: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-1712" to be "running"
Jan 24 21:33:29.109: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 102.254078ms
Jan 24 21:33:31.213: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205724397s
Jan 24 21:33:33.213: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206174153s
Jan 24 21:33:35.213: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.206462781s
Jan 24 21:33:35.213: INFO: Pod "test-pod" satisfied condition "running"
STEP: Deleting the namespace 01/24/23 21:33:35.213
STEP: Waiting for the namespace to be removed. 01/24/23 21:33:35.326
STEP: Recreating the namespace 01/24/23 21:33:46.43
STEP: Verifying there are no pods in the namespace 01/24/23 21:33:46.744
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:187
Jan 24 21:33:46.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8380" for this suite. 01/24/23 21:33:46.954
STEP: Destroying namespace "nsdeletetest-1712" for this suite. 01/24/23 21:33:47.063
Jan 24 21:33:47.165: INFO: Namespace nsdeletetest-1712 was already deleted
STEP: Destroying namespace "nsdeletetest-2459" for this suite. 01/24/23 21:33:47.165
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","completed":44,"skipped":3635,"failed":0}
------------------------------
• [19.610 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:242
------------------------------
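Namespace deletion, as the test shows, is asynchronous: the namespace enters Terminating and only disappears once its pods are gone, which is why the test separately "waits for the namespace to be removed". A minimal sketch of that delete-and-wait pattern with client-go; the polling interval and timeout are illustrative assumptions.

```go
// Sketch: delete a namespace, then poll until the apiserver reports
// NotFound, mirroring the test's delete/wait steps. Timings are illustrative.
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deleteNamespaceAndWait(cs kubernetes.Interface, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Deletion is asynchronous: poll until the namespace is truly gone.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // gone; all pods were removed with it
		}
		return false, err // still terminating (err == nil) or a real error
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := deleteNamespaceAndWait(kubernetes.NewForConfigOrDie(cfg), "nsdeletetest-1712"); err != nil {
		panic(err)
	}
}
```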
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/scheduling/predicates.go:438
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 01/24/23 21:33:47.28
Jan 24 21:33:47.280: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 01/24/23 21:33:47.282
STEP: Waiting for a default service account to be provisioned in namespace 01/24/23 21:33:47.591
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/24/23 21:33:47.795
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Jan 24 21:33:47.998: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 21:33:48.215: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 21:33:48.318: INFO: Logging pods the apiserver thinks is on node capz-conf-jzg2c before test
Jan 24 21:33:48.428: INFO: calico-node-windows-77tct from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded)
Jan 24 21:33:48.428: INFO: Container calico-node-felix ready: true, restart count 1
Jan 24 21:33:48.428: INFO: Container calico-node-startup ready: true, restart count 0
Jan 24 21:33:48.428: INFO: containerd-logger-xt7tr from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded)
Jan 24 21:33:48.428: INFO: Container containerd-logger ready: true, restart count 0
Jan 24 21:33:48.428: INFO: csi-azuredisk-node-win-l79cl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (3 container statuses recorded)
Jan 24 21:33:48.428: INFO: Container azuredisk ready: true, restart count 0
Jan 24 21:33:48.428: INFO: Container liveness-probe ready: true, restart count 0
Jan 24 21:33:48.428: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 24 21:33:48.428: INFO: csi-proxy-xnqhl from kube-system started at 2023-01-24 19:21:00 +0000 UTC (1 container statuses recorded)
Jan 24 21:33:48.428: INFO: Container csi-proxy ready: true, restart count 0
Jan 24 21:33:48.428: INFO: kube-proxy-windows-6szqk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded)
Jan 24 21:33:48.428: INFO: Container kube-proxy ready: true, restart count 0
Jan 24 21:33:48.428: INFO: Logging pods the apiserver thinks is on node capz-conf-s4kcn before test
Jan 24 21:33:48.540: INFO: calico-node-windows-t9nl5 from calico-system started at 2023-01-24 19:20:29 +0000 UTC (2 container statuses recorded)
Jan 24 21:33:48.540: INFO: Container calico-node-felix ready: true, restart count 1
Jan 24 21:33:48.540: INFO: Container calico-node-startup ready: true, restart count 0
Jan 24 21:33:48.540: INFO: containerd-logger-6ndvk from kube-system started at 2023-01-24 19:20:29 +0000 UTC (1 container statuses recorded)
Jan 24 21:33:48.540: INFO: Container containerd-logger ready: true, restart count 0
Jan 24 21:33:48.540: INFO: csi-azuredisk-node-win-8mbvt from kube-system started at 2023-01-24 19:20:59 +0000 UTC (3 container statuses recorded)
Jan 24 21:33:48.540: INFO: Container azuredisk ready: true, restart count 0
Jan 24 21:33:48.540: INFO: Container liveness-probe ready: true, restart count 0
Jan 24 21:33:48.540: INFO: Container node-driver-
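The captured log cuts off mid-line above. The spec being set up, "validates that NodeSelector is respected if not matching", asserts that a pod whose nodeSelector matches no node is left Pending with a PodScheduled=False/Unschedulable condition. A minimal client-go sketch of that assertion follows; the pod name, image, selector label, and timeouts are illustrative assumptions rather than the values used by test/e2e/scheduling/predicates.go.

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // expectUnschedulable sketches the predicate this spec validates: a pod
    // whose nodeSelector matches no node must stay Pending, with the scheduler
    // recording reason Unschedulable rather than placing it anywhere.
    func expectUnschedulable(ctx context.Context, cs kubernetes.Interface, ns string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // A label no node carries, so the scheduler can never place the pod.
                NodeSelector: map[string]string{"label": "nonexistent-value"},
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "registry.k8s.io/pause:3.8",
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            return err
        }
        // Wait for the scheduler to record PodScheduled=False with reason
        // Unschedulable; the pod itself should remain Pending throughout.
        return wait.PollImmediate(2*time.Second, 1*time.Minute, func() (bool, error) {
            p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse &&
                    c.Reason == corev1.PodReasonUnschedulable {
                    return true, nil
                }
            }
            return false, nil
        })
    }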