Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 3h7m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Unexpected error:
    <*errors.withStack | 0xc0009ac150>: {
        error: <*errors.withMessage | 0xc000956820>{
            cause: <*errors.errorString | 0xc0004f4b60>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91

from junit.e2e_suite.1.xml
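The <*errors.withStack> / <*errors.withMessage> nesting above is how a wrapped error from github.com/pkg/errors prints. A minimal sketch of how such a chain is produced and unwrapped (illustrative only, assuming the pkg/errors module; runConformanceContainer is a hypothetical stand-in, not the CAPZ test code):

    package main

    import (
    	"fmt"

    	"github.com/pkg/errors"
    )

    // runConformanceContainer is a hypothetical stand-in for the container run
    // that exited non-zero in this job.
    func runConformanceContainer() error {
    	return fmt.Errorf("error container run failed with exit code 1")
    }

    func main() {
    	// errors.Wrap adds a message and a stack trace, producing the
    	// withStack{withMessage{errorString}} chain seen in the failure above.
    	err := errors.Wrap(runConformanceContainer(), "Unable to run conformance tests")

    	fmt.Println(err)               // Unable to run conformance tests: error container run failed with exit code 1
    	fmt.Println(errors.Cause(err)) // error container run failed with exit code 1
    }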
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/22/23 23:12:14.386 INFO: Cluster name is capz-conf-zs64h3 STEP: Creating namespace "capz-conf-zs64h3" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:12:14.386 Jan 22 23:12:14.386: INFO: starting to create namespace for hosting the "capz-conf-zs64h3" test spec INFO: Creating namespace capz-conf-zs64h3 INFO: Creating event watcher for namespace "capz-conf-zs64h3" < Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/22/23 23:12:14.453 (67ms) > Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/22/23 23:12:14.453 conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/22/23 23:12:14.453 conformance-tests Name | N | Min | Median | Mean | StdDev | Max ============================================================================================ cluster creation [duration] | 1 | 9m39.0136s | 9m39.0136s | 9m39.0136s | 0s | 9m39.0136s INFO: Creating the workload cluster with name "capz-conf-zs64h3" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.24.11-rc.0.6+7c685ed7305e76, 1 control-plane machines, 0 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-conf-zs64h3 --infrastructure (default) --kubernetes-version v1.24.11-rc.0.6+7c685ed7305e76 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/22/23 23:12:18.451 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/22/23 23:14:08.576 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/22/23 23:14:08.577 Jan 22 23:16:48.786: INFO: getting history for release projectcalico Jan 22 23:16:48.821: INFO: Release projectcalico does not exist, installing it Jan 22 23:16:49.666: INFO: creating 1 resource(s) Jan 22 23:16:49.720: INFO: creating 1 resource(s) Jan 22 23:16:49.766: INFO: creating 1 resource(s) Jan 22 23:16:49.823: INFO: creating 1 resource(s) Jan 22 23:16:49.872: INFO: creating 1 resource(s) Jan 22 23:16:49.928: INFO: creating 1 resource(s) Jan 22 23:16:50.037: INFO: creating 1 resource(s) Jan 22 23:16:50.108: INFO: creating 1 resource(s) Jan 22 23:16:50.154: INFO: creating 1 resource(s) Jan 22 23:16:50.199: INFO: creating 1 resource(s) Jan 22 23:16:50.245: INFO: creating 1 resource(s) Jan 22 23:16:50.299: INFO: creating 1 resource(s) Jan 22 23:16:50.350: INFO: creating 1 resource(s) Jan 22 23:16:50.397: INFO: creating 1 resource(s) Jan 22 23:16:50.451: INFO: creating 1 resource(s) Jan 22 23:16:50.513: INFO: creating 1 resource(s) Jan 22 23:16:50.576: INFO: creating 1 resource(s) Jan 22 23:16:50.651: INFO: creating 1 resource(s) Jan 22 23:16:50.724: INFO: creating 1 resource(s) Jan 22 23:16:50.861: INFO: creating 1 resource(s) Jan 
22 23:16:51.141: INFO: creating 1 resource(s) Jan 22 23:16:51.182: INFO: Clearing discovery cache Jan 22 23:16:51.182: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 22 23:16:54.899: INFO: creating 1 resource(s) Jan 22 23:16:55.375: INFO: creating 6 resource(s) Jan 22 23:16:55.967: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/22/23 23:16:56.28 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:16:56.424 Jan 22 23:16:56.424: INFO: starting to wait for deployment to become available Jan 22 23:17:06.492: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.068451587s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/22/23 23:17:07.551 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:17:07.726 Jan 22 23:17:07.726: INFO: starting to wait for deployment to become available Jan 22 23:18:08.207: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.481121193s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:08.515 Jan 22 23:18:08.515: INFO: starting to wait for deployment to become available Jan 22 23:18:08.555: INFO: Deployment calico-system/calico-typha is now available, took 39.558291ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/22/23 23:18:08.555 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:08.816 Jan 22 23:18:08.816: INFO: starting to wait for deployment to become available Jan 22 23:18:18.885: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.068921067s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/22/23 23:18:18.885 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:19.055 Jan 22 23:18:19.055: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 22 23:18:19.091: INFO: 1 daemonset calico-system/calico-node pods are running, took 36.039112ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/22/23 23:18:19.091 STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:19.233 Jan 22 23:18:19.233: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 22 23:18:19.267: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 33.545914ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/22/23 23:18:19.267 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:19.433 Jan 22 23:18:19.433: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 22 23:18:19.466: INFO: 0 daemonset kube-system/kube-proxy-windows pods are running, took 33.603104ms INFO: Waiting for the first control plane machine managed by capz-conf-zs64h3/capz-conf-zs64h3-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/22/23 23:18:19.494 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/22/23 23:18:19.503 Jan 22 23:18:19.559: INFO: getting history for release azuredisk-csi-driver-oot Jan 22 23:18:19.594: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 22 23:18:21.926: INFO: creating 1 resource(s) Jan 22 23:18:22.050: INFO: creating 18 resource(s) Jan 22 23:18:22.383: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/22/23 23:18:22.408 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:18:22.551 Jan 22 23:18:22.551: INFO: starting to wait for deployment to become available Jan 22 23:19:02.731: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.179429923s STEP: Waiting for Running azure-disk-csi node pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:86 @ 01/22/23 23:19:02.731 STEP: waiting for daemonset kube-system/csi-azuredisk-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:19:02.9 Jan 22 23:19:02.900: INFO: waiting for daemonset kube-system/csi-azuredisk-node to be complete Jan 22 23:19:02.933: INFO: 1 daemonset kube-system/csi-azuredisk-node pods are running, took 33.495955ms STEP: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/22/23 23:19:03.099 Jan 22 23:19:03.099: INFO: waiting for daemonset kube-system/csi-azuredisk-node-win to be complete Jan 22 23:19:03.132: INFO: 0 daemonset kube-system/csi-azuredisk-node-win pods are running, took 33.098847ms INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-conf-zs64h3/capz-conf-zs64h3-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:165 @ 01/22/23 23:19:03.146 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:196 @ 01/22/23 23:19:03.153 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/22/23 23:19:03.181 STEP: Checking all the machines controlled by capz-conf-zs64h3-md-0 are in the "<None>" failure domain - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/22/23 23:19:03.194 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/machinedeployment_helpers.go:102 @ 01/22/23 23:19:03.205 STEP: Checking all the machines controlled by capz-conf-zs64h3-md-win are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/22/23 23:21:53.503 INFO: Waiting for the machine pools to be provisioned INFO: Using repo-list '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/kubetest/repo-list.yaml' for version 'v1.24.11-rc.0.6+7c685ed7305e76' STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "-ginkgo.progress=true" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.slowSpecThreshold=120" "-node-os-distro=windows" "-dump-logs-on-failure=true" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.trace=true" "-ginkgo.v=true" "-prepull-images=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=0"] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/22/23 23:21:53.811 I0122 23:22:01.018730 14 e2e.go:129] Starting e2e run "e9b272d5-52c6-4cae-a53c-abd7836f7454" on Ginkgo node 1 {"msg":"Test Suite starting","total":61,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: �[1m1674429720�[0m - Will randomize all specs Will run �[1m61�[0m of �[1m6973�[0m specs Jan 22 23:22:03.653: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 22 23:22:03.657: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 22 23:22:03.874: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 22 23:22:04.018: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 22 23:22:04.018: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready. 
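As a quick way to read the -ginkgo.focus expression in the kubetest invocation above, here is a minimal Go sketch (illustrative only, not part of the suite) that compiles the pattern with the shell escaping removed and checks it against one of the spec names that ran in this job:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// The -ginkgo.focus expression from the kubetest command above,
    	// with shell escaping removed.
    	focus := regexp.MustCompile(`(\[sig-windows\]|\[sig-scheduling\].SchedulerPreemption|` +
    		`\[sig-autoscaling\].\[Feature:HPA\]|\[sig-apps\].CronJob).*(\[Serial\]|\[Slow\])|` +
    		`(\[Serial\]|\[Slow\]).*(\[Conformance\]|\[NodeConformance\])|` +
    		`\[sig-api-machinery\].Garbage.collector`)

    	// One of the specs that ran in this job.
    	spec := "[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"
    	fmt.Println(focus.MatchString(spec)) // true
    }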
Jan 22 23:22:04.018: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 22 23:22:04.072: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-azuredisk-node-win' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed) Jan 22 23:22:04.072: INFO: Pre-pulling images so that they are cached for the tests. Jan 22 23:22:04.354: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39 Jan 22 23:22:04.407: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:04.461: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0 Jan 22 23:22:04.461: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:22:13.510: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:13.562: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0 Jan 22 23:22:13.562: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:22:22.507: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:22.558: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 0 Jan 22 23:22:22.558: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:22:31.508: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:31.559: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39: 2 Jan 22 23:22:31.559: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-agnhost-2.39 Jan 22 23:22:31.559: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2 Jan 22 23:22:31.602: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this 
node Jan 22 23:22:31.653: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2: 1 Jan 22 23:22:31.653: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:22:40.698: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:40.749: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2: 2 Jan 22 23:22:40.749: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-busybox-1.29-2 Jan 22 23:22:40.749: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2 Jan 22 23:22:40.793: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:40.844: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2: 2 Jan 22 23:22:40.844: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-httpd-2.4.38-2 Jan 22 23:22:40.844: INFO: Waiting for img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2 Jan 22 23:22:40.889: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:22:40.939: INFO: Number of nodes with available pods controlled by daemonset img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2: 2 Jan 22 23:22:40.939: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-k8s.gcr.io-e2e-test-images-nginx-1.14-2 Jan 22 23:22:40.979: INFO: e2e test version: v1.24.11-rc.0.6+7c685ed7305e76 Jan 22 23:22:41.011: INFO: kube-apiserver version: v1.24.11-rc.0.6+7c685ed7305e76 Jan 22 23:22:41.011: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 22 23:22:41.045: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[1mvalidates lower priority pod preemption by critical pod [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:22:41.047: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-preemption W0122 23:22:41.183576 14 
warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 22 23:22:41.184: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jan 22 23:22:41.221: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 22 23:22:41.459: INFO: Waiting up to 1m0s for all nodes to be ready Jan 22 23:23:41.767: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Create pods that use 4/5 of node resources. Jan 22 23:23:41.892: INFO: Created pod: pod0-0-sched-preemption-low-priority Jan 22 23:23:41.930: INFO: Created pod: pod0-1-sched-preemption-medium-priority Jan 22 23:23:42.012: INFO: Created pod: pod1-0-sched-preemption-medium-priority Jan 22 23:23:42.048: INFO: Created pod: pod1-1-sched-preemption-medium-priority �[1mSTEP�[0m: Wait for pods to be scheduled. �[1mSTEP�[0m: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188 Jan 22 23:24:18.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sched-preemption-1329" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 �[32m•�[0m{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":61,"completed":1,"skipped":59,"failed":0} 
�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:24:18.939: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs �[1mSTEP�[0m: Gathering metrics Jan 22 23:24:20.184: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true) Jan 22 23:24:20.545: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For 
garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:24:20.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-5889" for this suite. �[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":61,"completed":2,"skipped":270,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)�[0m �[90mReplicationController light�[0m �[1mShould scale from 2 pods to 1 pod [Slow]�[0m �[37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:82�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:24:20.617: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename horizontal-pod-autoscaling �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 2 pods to 1 pod [Slow] test/e2e/autoscaling/horizontal_pod_autoscaling.go:82 �[1mSTEP�[0m: Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas �[1mSTEP�[0m: creating replication controller rc-light in namespace horizontal-pod-autoscaling-7665 I0122 23:24:20.957737 14 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-7665, replica count: 2 I0122 23:24:31.009368 14 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: Running controller �[1mSTEP�[0m: creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-7665 I0122 23:24:31.093226 14 runners.go:193] Created replication controller with name: rc-light-ctrl, namespace: horizontal-pod-autoscaling-7665, replica count: 1 I0122 23:24:41.144461 14 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 23:24:46.145: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Jan 22 23:24:46.178: INFO: RC rc-light: consume 50 millicores in total Jan 22 23:24:46.178: INFO: RC rc-light: setting consumption to 50 millicores in 
total Jan 22 23:24:46.178: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:24:46.178: INFO: RC rc-light: consume 0 MB in total Jan 22 23:24:46.178: INFO: RC rc-light: setting consumption to 0 MB in total Jan 22 23:24:46.179: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:24:46.179: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:24:46.179: INFO: RC rc-light: consume custom metric 0 in total Jan 22 23:24:46.179: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:24:46.179: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:24:46.179: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:24:49.254: INFO: RC rc-light: setting bump of metric QPS to 0 in total Jan 22 23:24:49.344: INFO: waiting for 1 replicas (current: 2) Jan 22 23:25:09.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:25:19.256: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:25:19.256: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:25:19.256: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:25:19.256: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:25:19.256: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:25:19.256: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:25:29.379: INFO: waiting for 1 replicas (current: 2) Jan 22 23:25:49.295: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:25:49.295: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:25:49.295: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:25:49.296: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:25:49.305: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:25:49.305: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:25:49.378: INFO: waiting for 1 replicas (current: 2) Jan 22 23:26:09.412: INFO: waiting for 1 replicas (current: 2) Jan 22 23:26:19.330: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:26:19.330: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:26:19.330: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:26:19.330: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:26:19.344: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:26:19.344: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:26:29.385: INFO: waiting for 1 replicas (current: 2) Jan 22 23:26:49.366: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:26:49.366: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:26:49.367: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:26:49.367: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:26:49.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:26:49.383: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:26:49.383: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:27:09.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:27:19.402: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:27:19.402: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:27:19.402: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:27:19.402: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:27:19.422: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:27:19.423: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:27:29.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:27:49.380: INFO: waiting for 1 replicas (current: 2) Jan 22 23:27:49.438: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:27:49.438: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:27:49.438: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:27:49.438: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:27:49.463: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:27:49.463: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:28:09.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:28:19.476: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:28:19.476: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:28:19.476: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:28:19.476: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:28:19.501: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:28:19.501: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:28:29.378: INFO: waiting for 1 replicas (current: 2) Jan 22 23:28:49.381: INFO: waiting for 1 replicas (current: 2) Jan 22 23:28:49.511: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:28:49.511: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:28:49.511: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:28:49.511: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:28:49.540: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:28:49.541: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:29:09.380: INFO: waiting for 1 replicas (current: 2) Jan 22 23:29:19.546: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:29:19.546: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:29:19.546: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:29:19.547: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:29:19.580: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:29:19.580: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:29:29.378: INFO: waiting for 1 replicas (current: 2) Jan 22 23:29:49.378: INFO: waiting for 1 replicas (current: 2) Jan 22 23:29:49.583: INFO: RC rc-light: sending request to consume 0 MB Jan 22 23:29:49.583: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:29:49.583: INFO: RC rc-light: sending request to consume 0 of custom metric QPS Jan 22 23:29:49.584: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:29:49.621: INFO: RC rc-light: sending request to consume 50 millicores Jan 22 23:29:49.622: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7665/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Jan 22 23:30:09.377: INFO: waiting for 1 replicas (current: 1) �[1mSTEP�[0m: Removing consuming RC rc-light Jan 22 23:30:09.414: INFO: RC rc-light: stopping metric consumer Jan 22 23:30:09.414: INFO: RC rc-light: stopping CPU consumer Jan 22 23:30:09.414: INFO: RC rc-light: stopping mem consumer �[1mSTEP�[0m: deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-7665, will wait for the garbage collector to delete the pods Jan 22 23:30:19.540: INFO: Deleting ReplicationController rc-light took: 40.088799ms Jan 22 23:30:19.641: INFO: Terminating ReplicationController rc-light pods took: 100.995422ms �[1mSTEP�[0m: deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-7665, will wait for the garbage collector to delete the pods Jan 22 23:30:21.840: INFO: Deleting ReplicationController rc-light-ctrl took: 35.797542ms Jan 22 23:30:21.941: INFO: Terminating ReplicationController rc-light-ctrl pods took: 101.086443ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale 
resource: CPU) test/e2e/framework/framework.go:188 Jan 22 23:30:23.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-7665" for this suite. �[32m• [SLOW TEST:363.257 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[90mtest/e2e/autoscaling/framework.go:23�[0m ReplicationController light �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:69�[0m Should scale from 2 pods to 1 pod [Slow] �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:82�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","total":61,"completed":3,"skipped":305,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould delete RS created by deployment when not orphaning [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:30:23.876: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for all rs to be garbage collected �[1mSTEP�[0m: Gathering metrics Jan 22 23:30:24.472: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true) Jan 22 23:30:24.812: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:30:24.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-279" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":61,"completed":4,"skipped":332,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] Daemon set [Serial]�[0m �[1mshould run and stop complex daemon [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:30:24.889: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename daemonsets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop complex daemon [Conformance] test/e2e/framework/framework.go:652 Jan 22 23:30:25.260: INFO: Creating daemon "daemon-set" with a node selector �[1mSTEP�[0m: Initially, daemon pods should not be running on any nodes. Jan 22 23:30:25.336: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:25.336: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set �[1mSTEP�[0m: Change node label to blue, check that daemon pod is launched. 
Jan 22 23:30:25.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:25.493: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:26.534: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:26.534: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:27.527: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:27.527: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:28.526: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:28.527: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:29.527: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:29.527: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:30.527: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:30:30.527: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set �[1mSTEP�[0m: Update the node label to green, and wait for daemons to be unscheduled Jan 22 23:30:30.671: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:30.671: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set �[1mSTEP�[0m: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 22 23:30:30.748: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:30.748: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:31.782: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:31.782: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:32.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:32.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:33.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:33.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:34.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:34.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:35.782: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:35.782: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:36.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:36.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:37.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:37.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:38.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:38.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:39.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:39.783: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:30:40.786: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:30:40.786: INFO: Number of running nodes: 1, number of available 
pods: 1 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 �[1mSTEP�[0m: Deleting DaemonSet "daemon-set" �[1mSTEP�[0m: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3375, will wait for the garbage collector to delete the pods Jan 22 23:30:40.974: INFO: Deleting DaemonSet.extensions daemon-set took: 37.188131ms Jan 22 23:30:41.075: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.579941ms Jan 22 23:30:46.109: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:30:46.109: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 22 23:30:46.142: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4175"},"items":null} Jan 22 23:30:46.174: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4175"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:30:46.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "daemonsets-3375" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":61,"completed":5,"skipped":379,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould support cascading deletion of custom resources�[0m �[37mtest/e2e/apimachinery/garbage_collector.go:905�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:30:46.398: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Jan 22 23:30:46.633: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 22 23:30:48.877: INFO: created owner resource "ownervd78x" Jan 22 23:30:48.913: INFO: created dependent resource "dependenttnhdb" Jan 22 23:30:48.987: INFO: created canary resource "canaryhr48x" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:31:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-203" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":61,"completed":6,"skipped":467,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)�[0m �[90m[Serial] [Slow] ReplicaSet�[0m �[1mShould scale from 5 pods to 3 pods and from 3 to 1�[0m �[37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:53�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:31:04.305: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename horizontal-pod-autoscaling �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:53 �[1mSTEP�[0m: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 5 replicas �[1mSTEP�[0m: creating replicaset rs in namespace horizontal-pod-autoscaling-6487 �[1mSTEP�[0m: creating replicaset rs in namespace horizontal-pod-autoscaling-6487 I0122 23:31:04.620711 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-6487, replica count: 5 �[1mSTEP�[0m: Running controller I0122 23:31:14.671613 14 runners.go:193] rs Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-6487 I0122 23:31:14.755544 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-6487, replica count: 1 I0122 23:31:24.806022 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 23:31:29.806: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 22 23:31:29.840: INFO: RC rs: consume 325 millicores in total Jan 22 23:31:29.840: INFO: RC rs: sending request to consume 0 millicores Jan 22 23:31:29.840: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 } Jan 22 23:31:29.887: INFO: RC rs: setting consumption to 325 millicores in total Jan 22 23:31:29.887: INFO: RC rs: consume 0 MB in total Jan 22 23:31:29.887: INFO: RC rs: setting consumption to 0 MB in total Jan 22 23:31:29.887: INFO: RC rs: sending request to consume 0 MB Jan 22 23:31:29.887: INFO: RC rs: consume custom metric 0 in total Jan 22 23:31:29.887: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 
23:31:29.888: INFO: RC rs: setting bump of metric QPS to 0 in total Jan 22 23:31:29.888: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:31:29.888: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:31:29.960: INFO: waiting for 3 replicas (current: 5) Jan 22 23:31:49.994: INFO: waiting for 3 replicas (current: 5) Jan 22 23:31:59.887: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:31:59.887: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:31:59.929: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:31:59.929: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:31:59.930: INFO: RC rs: sending request to consume 0 MB Jan 22 23:31:59.930: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:32:09.997: INFO: waiting for 3 replicas (current: 5) Jan 22 23:32:29.968: INFO: RC rs: sending request to consume 0 MB Jan 22 23:32:29.968: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:32:29.968: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:32:29.968: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:32:29.969: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:32:29.968: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:32:29.997: INFO: waiting for 3 replicas (current: 5) Jan 22 23:32:49.997: INFO: waiting for 3 replicas (current: 5) Jan 22 23:33:00.028: INFO: RC rs: sending request to consume 0 MB Jan 22 23:33:00.028: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:33:00.028: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:33:00.028: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:33:00.028: INFO: RC rs: sending request to 
consume 0 of custom metric QPS Jan 22 23:33:00.028: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:33:09.998: INFO: waiting for 3 replicas (current: 5) Jan 22 23:33:29.996: INFO: waiting for 3 replicas (current: 5) Jan 22 23:33:30.065: INFO: RC rs: sending request to consume 0 MB Jan 22 23:33:30.065: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:33:30.071: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:33:30.071: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:33:30.071: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:33:30.071: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:33:49.995: INFO: waiting for 3 replicas (current: 5) Jan 22 23:34:00.104: INFO: RC rs: sending request to consume 0 MB Jan 22 23:34:00.104: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:34:00.113: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:34:00.113: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:34:00.113: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:34:00.114: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:34:09.996: INFO: waiting for 3 replicas (current: 5) Jan 22 23:34:29.996: INFO: waiting for 3 replicas (current: 5) Jan 22 23:34:30.139: INFO: RC rs: sending request to consume 0 MB Jan 22 23:34:30.140: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:34:30.155: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:34:30.155: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:34:30.155: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:34:30.155: INFO: ConsumeCustomMetric URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:34:49.996: INFO: waiting for 3 replicas (current: 5) Jan 22 23:35:00.175: INFO: RC rs: sending request to consume 0 MB Jan 22 23:35:00.176: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:35:00.190: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:35:00.190: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:35:00.195: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:35:00.196: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:35:09.994: INFO: waiting for 3 replicas (current: 5) Jan 22 23:35:29.997: INFO: waiting for 3 replicas (current: 5) Jan 22 23:35:30.214: INFO: RC rs: sending request to consume 0 MB Jan 22 23:35:30.215: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:35:30.226: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:35:30.226: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:35:30.238: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:35:30.238: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:35:49.995: INFO: waiting for 3 replicas (current: 5) Jan 22 23:36:00.251: INFO: RC rs: sending request to consume 0 MB Jan 22 23:36:00.252: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:36:00.262: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:36:00.262: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:36:00.280: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:36:00.280: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:36:09.997: INFO: waiting for 3 replicas (current: 5) Jan 22 23:36:29.998: INFO: waiting for 3 replicas (current: 5) Jan 22 23:36:30.287: INFO: RC rs: sending request to consume 0 MB Jan 22 23:36:30.287: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:36:30.297: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:36:30.297: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:36:30.321: INFO: RC rs: sending request to consume 325 millicores Jan 22 23:36:30.321: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 22 23:36:49.997: INFO: waiting for 3 replicas (current: 3) Jan 22 23:36:49.997: INFO: RC rs: consume 10 millicores in total Jan 22 23:36:49.997: INFO: RC rs: setting consumption to 10 millicores in total Jan 22 23:36:50.030: INFO: waiting for 1 replicas (current: 3) Jan 22 23:37:00.324: INFO: RC rs: sending request to consume 0 MB Jan 22 23:37:00.324: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:37:00.332: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:37:00.332: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:37:00.362: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:37:00.362: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:37:10.065: INFO: waiting for 1 replicas (current: 3) Jan 22 23:37:30.068: INFO: waiting for 1 replicas (current: 3) Jan 22 23:37:30.360: INFO: RC rs: sending request to consume 0 MB Jan 22 23:37:30.360: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:37:30.368: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:37:30.368: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:37:30.402: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:37:30.402: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:37:50.065: INFO: waiting for 1 replicas (current: 3) Jan 22 23:38:00.396: INFO: RC rs: sending request to consume 0 MB Jan 22 23:38:00.396: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:38:00.403: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:38:00.403: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:38:00.443: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:38:00.443: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:38:10.065: INFO: waiting for 1 replicas (current: 3) Jan 22 23:38:30.065: INFO: waiting for 1 replicas (current: 3) Jan 22 23:38:30.432: INFO: RC rs: sending request to consume 0 MB Jan 22 23:38:30.432: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:38:30.438: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:38:30.438: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:38:30.484: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:38:30.484: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:38:50.064: INFO: waiting for 1 replicas (current: 3) Jan 22 23:39:00.472: INFO: RC rs: sending request to consume 0 MB Jan 22 23:39:00.472: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:39:00.474: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:39:00.474: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:39:00.523: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:39:00.523: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:39:10.064: INFO: waiting for 1 
replicas (current: 3) Jan 22 23:39:30.064: INFO: waiting for 1 replicas (current: 3) Jan 22 23:39:30.508: INFO: RC rs: sending request to consume 0 MB Jan 22 23:39:30.508: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:39:30.508: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:39:30.508: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:39:30.565: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:39:30.565: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:39:50.067: INFO: waiting for 1 replicas (current: 3) Jan 22 23:40:00.547: INFO: RC rs: sending request to consume 0 MB Jan 22 23:40:00.547: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:40:00.548: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:40:00.548: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:40:00.608: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:40:00.608: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:40:10.064: INFO: waiting for 1 replicas (current: 3) Jan 22 23:40:30.068: INFO: waiting for 1 replicas (current: 3) Jan 22 23:40:30.584: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:40:30.584: INFO: RC rs: sending request to consume 0 MB Jan 22 23:40:30.584: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:40:30.585: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:40:30.649: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:40:30.649: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:40:50.067: INFO: waiting for 1 replicas (current: 3) Jan 22 23:41:00.624: INFO: RC rs: sending request to consume 0 MB Jan 22 23:41:00.624: INFO: RC rs: sending 
request to consume 0 of custom metric QPS Jan 22 23:41:00.624: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:41:00.624: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:41:00.692: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:41:00.692: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:41:10.068: INFO: waiting for 1 replicas (current: 3) Jan 22 23:41:30.065: INFO: waiting for 1 replicas (current: 3) Jan 22 23:41:30.660: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:41:30.660: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:41:30.660: INFO: RC rs: sending request to consume 0 MB Jan 22 23:41:30.660: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:41:30.736: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:41:30.736: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:41:50.065: INFO: waiting for 1 replicas (current: 2) Jan 22 23:42:00.696: INFO: RC rs: sending request to consume 0 MB Jan 22 23:42:00.697: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:42:00.697: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 22 23:42:00.697: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:42:00.778: INFO: RC rs: sending request to consume 10 millicores Jan 22 23:42:00.778: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6487/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 22 23:42:10.068: INFO: waiting for 1 replicas (current: 1) �[1mSTEP�[0m: Removing consuming RC rs Jan 22 23:42:10.109: INFO: RC rs: stopping metric consumer Jan 22 23:42:10.109: INFO: RC rs: stopping mem consumer Jan 22 23:42:10.109: INFO: RC rs: stopping CPU consumer �[1mSTEP�[0m: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-6487, will 
wait for the garbage collector to delete the pods
Jan 22 23:42:20.238: INFO: Deleting ReplicaSet.apps rs took: 40.54808ms
Jan 22 23:42:20.339: INFO: Terminating ReplicaSet.apps rs pods took: 100.820434ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-6487, will wait for the garbage collector to delete the pods
Jan 22 23:42:22.029: INFO: Deleting ReplicationController rs-ctrl took: 37.123077ms
Jan 22 23:42:22.130: INFO: Terminating ReplicationController rs-ctrl pods took: 100.982834ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188
Jan 22 23:42:23.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-6487" for this suite.
• [SLOW TEST:679.248 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
[Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:48
Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":7,"skipped":507,"failed":0}
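The HPA spec above drives CPU load through the rs-ctrl proxy service (325 millicores, then 10 millicores) and waits for the ReplicaSet to scale from 5 to 3 and then to 1 replica. The HorizontalPodAutoscaler object itself is created inside the test binary and does not appear in this log; as a hedged sketch only, an equivalent CPU-based autoscaler could be created by hand with illustrative thresholds:
kubectl autoscale replicaset rs --cpu-percent=20 --min=1 --max=5 -n horizontal-pod-autoscaling-6487   # illustrative target utilisation; the test's real thresholds are set in code
kubectl get hpa rs -n horizontal-pod-autoscaling-6487 --watch                                          # watch REPLICAS step down as the reported CPU consumption drops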
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:42:23.556: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188
Jan 22 23:42:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7376" for this suite.
STEP: Destroying namespace "nsdeletetest-8918" for this suite.
Jan 22 23:42:39.379: INFO: Namespace nsdeletetest-8918 was already deleted
STEP: Destroying namespace "nsdeletetest-6879" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":61,"completed":8,"skipped":737,"failed":0}
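This spec deletes a freshly created namespace and then verifies that the pod created inside it is gone. A minimal by-hand equivalent (namespace and pod names here are placeholders, not the test's nsdeletetest-* namespaces):
kubectl create namespace nsdelete-demo
kubectl run sleeper --image=registry.k8s.io/pause:3.9 -n nsdelete-demo   # any pod in the namespace will do
kubectl delete namespace nsdelete-demo --wait=true                       # deleting the namespace removes every object it contains
kubectl get pods -n nsdelete-demo                                        # expected: the namespace is gone, so no pods remain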
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":61,"completed":8,"skipped":737,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]�[0m �[90mGMSA support�[0m �[1mworks end to end�[0m �[37mtest/e2e/windows/gmsa_full.go:97�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:42:39.422: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gmsa-full-test-windows �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works end to end test/e2e/windows/gmsa_full.go:97 �[1mSTEP�[0m: finding the worker node that fulfills this test's assumptions Jan 22 23:42:39.686: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0 [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:188 Jan 22 23:42:39.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gmsa-full-test-windows-3482" for this suite. 
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:42:39.776: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188
Jan 22 23:42:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6038" for this suite.
STEP: Destroying namespace "nsdeletetest-9123" for this suite.
Jan 22 23:42:46.545: INFO: Namespace nsdeletetest-9123 was already deleted
STEP: Destroying namespace "nsdeletetest-9390" for this suite.
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":61,"completed":9,"skipped":912,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] Kubelet-Stats [Serial]�[0m �[90mKubelet stats collection for Windows nodes�[0m �[0mwhen running 10 pods�[0m �[1mshould return within 10 seconds�[0m �[37mtest/e2e/windows/kubelet_stats.go:47�[0m [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:42:46.585: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-stats-test-windows-serial �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should return within 10 seconds test/e2e/windows/kubelet_stats.go:47 �[1mSTEP�[0m: Selecting a Windows node Jan 22 23:42:46.852: INFO: Using node: capz-conf-2xrmj �[1mSTEP�[0m: Scheduling 10 pods Jan 22 23:42:46.928: INFO: The status of Pod 
statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.932: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.933: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.968: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.969: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.969: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.969: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.970: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.970: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:46.970: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:48.965: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:48.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:48.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.003: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.003: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.005: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.005: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.005: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:49.005: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:50.964: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:50.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to 
be Running (with Ready = true) Jan 22 23:42:50.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.005: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.005: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.005: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.007: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:51.007: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:52.962: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:52.966: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:52.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.003: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.004: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.004: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.006: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:53.007: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:54.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:54.965: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:54.965: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.005: INFO: The status of Pod 
statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.006: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.007: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:55.008: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:56.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:56.969: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:56.969: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.003: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.003: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.003: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.004: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.005: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.005: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:57.006: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:58.965: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:58.966: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:58.966: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.004: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.004: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to 
be Running (with Ready = true) Jan 22 23:42:59.005: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.005: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.006: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:42:59.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:00.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:00.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:00.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.004: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.004: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.005: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.007: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:01.007: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:02.962: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:02.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:02.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.003: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.003: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.005: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.005: INFO: The status of Pod 
statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.006: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.007: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:03.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:04.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:04.966: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:04.966: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.006: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.007: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.008: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.008: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.008: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:05.009: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:06.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:06.966: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:06.966: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.010: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.011: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.012: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.012: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.013: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to 
be Running (with Ready = true) Jan 22 23:43:07.013: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:07.013: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:08.968: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:08.968: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:08.970: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.003: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.003: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.004: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.004: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.005: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:09.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:10.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:10.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:10.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.003: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.004: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.005: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.005: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.005: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:11.005: INFO: The status of Pod 
statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:12.963: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:12.966: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:12.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.004: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.006: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.007: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.007: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:13.007: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:14.964: INFO: The status of Pod statscollectiontest-c2dea3b7-a005-41ff-b182-deea37e3e317-4 is Running (Ready = true) Jan 22 23:43:14.966: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:14.967: INFO: The status of Pod statscollectiontest-300cab2f-a4fb-432f-a4c6-393354acc76e-9 is Running (Ready = true) Jan 22 23:43:15.009: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.009: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.010: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.010: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.011: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.016: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:15.016: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:16.967: INFO: The status of Pod statscollectiontest-ae7435cc-7490-4a35-adf0-5ff72084a714-2 is Running (Ready = true) Jan 22 23:43:17.004: INFO: The status of Pod 
statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.004: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.005: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.005: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:17.007: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.006: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.006: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.007: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.007: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.008: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:19.008: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.007: INFO: The status of Pod statscollectiontest-8b313bc3-2c83-438b-84d9-042ec6f4b1e1-5 is Running (Ready = true) Jan 22 23:43:21.007: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.007: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.008: INFO: The status of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.008: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:21.009: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:23.006: INFO: The status of Pod statscollectiontest-4c68a85b-b366-4a43-8509-5adcfbe34578-6 is Running (Ready = true) Jan 22 23:43:23.006: INFO: The status 
of Pod statscollectiontest-bd61fbf4-966d-4754-b1d8-5a60ab5a6e99-1 is Running (Ready = true) Jan 22 23:43:23.006: INFO: The status of Pod statscollectiontest-080cbf39-6a3c-42d7-82ba-8ed5a34ac81e-7 is Running (Ready = true) Jan 22 23:43:23.006: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:23.007: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:23.008: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:25.005: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:25.005: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:25.006: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Pending, waiting for it to be Running (with Ready = true) Jan 22 23:43:27.004: INFO: The status of Pod statscollectiontest-a5dd00bb-9609-4ffd-87ff-ca23c8524379-3 is Running (Ready = true) Jan 22 23:43:27.004: INFO: The status of Pod statscollectiontest-bf22f600-5e5b-4105-871c-e234fe9f40c6-8 is Running (Ready = true) Jan 22 23:43:27.004: INFO: The status of Pod statscollectiontest-1bde5a05-2843-41b3-a24c-a8aa0e7cbf11-0 is Running (Ready = true) �[1mSTEP�[0m: Waiting up to 3 minutes for pods to be running Jan 22 23:43:27.039: INFO: Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-3795' to be running and ready Jan 22 23:43:27.143: INFO: 10 / 10 pods in namespace 'kubelet-stats-test-windows-serial-3795' are running and ready (0 seconds elapsed) Jan 22 23:43:27.143: INFO: expected 0 pod replicas in namespace 'kubelet-stats-test-windows-serial-3795', 0 are Running and Ready. �[1mSTEP�[0m: Getting kubelet stats 5 times and checking average duration Jan 22 23:43:53.815: INFO: Getting kubelet stats for node capz-conf-2xrmj took an average of 332 milliseconds over 5 iterations [AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] test/e2e/framework/framework.go:188 Jan 22 23:43:53.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-stats-test-windows-serial-3795" for this suite. 
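The long block above is the e2e framework polling the ten statscollectiontest pods until each reports Running with Ready = true ("Waiting up to 3m0s for all pods (need at least 10) in namespace 'kubelet-stats-test-windows-serial-3795'"). A minimal sketch of that wait loop with client-go follows; the 2-second interval and the exact list options are assumptions, only the namespace and the polling pattern come from the log.

// poll_ready.go: wait until every pod in a namespace is Running and Ready,
// mirroring the status polling recorded above. Interval/timeout are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod is Running and its Ready condition is True.
func podReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // kubeconfig path shown in the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "kubelet-stats-test-windows-serial-3795" // namespace from the log
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			if !podReady(p) {
				fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	})
	if err != nil {
		panic(err)
	}
}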
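The "Getting kubelet stats 5 times and checking average duration" step above reads the kubelet Summary API through the API-server node proxy and averages the latency (332 ms over 5 iterations for node capz-conf-2xrmj in this run). A rough sketch of one such timed read is below; the raw-byte handling and the loop shape are assumptions, not the test's actual helpers.

// kubelet_stats.go: time five reads of a node's kubelet /stats/summary endpoint
// via the API-server proxy, loosely following the step recorded above.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	var total time.Duration
	const iterations = 5
	for i := 0; i < iterations; i++ {
		start := time.Now()
		raw, err := cs.CoreV1().RESTClient().Get().
			Resource("nodes").
			Name("capz-conf-2xrmj"). // node polled in the log
			SubResource("proxy").
			Suffix("stats/summary").
			DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		total += time.Since(start)
		fmt.Printf("iteration %d: fetched %d bytes of kubelet stats\n", i, len(raw))
	}
	fmt.Printf("average duration over %d iterations: %v\n", iterations, total/iterations)
}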
�[32m•�[0m{"msg":"PASSED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","total":61,"completed":10,"skipped":1139,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] EmptyDir wrapper volumes�[0m �[1mshould not cause race condition when used for configmaps [Serial] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:43:53.890: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir-wrapper �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating 50 configmaps �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 22 23:43:56.045: INFO: Pod name wrapped-volume-race-f4361d21-8243-4d45-9706-72182ba81eff: Found 3 pods out of 5 Jan 22 23:44:01.090: INFO: Pod name wrapped-volume-race-f4361d21-8243-4d45-9706-72182ba81eff: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-f4361d21-8243-4d45-9706-72182ba81eff in namespace emptydir-wrapper-8896, will wait for the garbage collector to delete the pods Jan 22 23:44:17.505: INFO: Deleting ReplicationController wrapped-volume-race-f4361d21-8243-4d45-9706-72182ba81eff took: 60.593242ms Jan 22 23:44:17.605: INFO: Terminating ReplicationController wrapped-volume-race-f4361d21-8243-4d45-9706-72182ba81eff pods took: 100.275782ms �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 22 23:44:21.336: INFO: Pod name wrapped-volume-race-06b30ce0-60ea-43e9-9d12-9ce3c20fc4a3: Found 3 pods out of 5 Jan 22 23:44:26.381: INFO: Pod name wrapped-volume-race-06b30ce0-60ea-43e9-9d12-9ce3c20fc4a3: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-06b30ce0-60ea-43e9-9d12-9ce3c20fc4a3 in namespace emptydir-wrapper-8896, will wait for the garbage collector to delete the pods Jan 22 23:44:40.752: INFO: Deleting ReplicationController wrapped-volume-race-06b30ce0-60ea-43e9-9d12-9ce3c20fc4a3 took: 49.271298ms Jan 22 23:44:40.853: INFO: Terminating ReplicationController wrapped-volume-race-06b30ce0-60ea-43e9-9d12-9ce3c20fc4a3 pods took: 100.198162ms �[1mSTEP�[0m: Creating RC which spawns configmap-volume pods Jan 22 23:44:45.480: INFO: Pod name wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9: Found 3 pods out of 5 Jan 22 23:44:50.529: INFO: Pod name wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9: Found 5 pods out of 5 �[1mSTEP�[0m: Ensuring each pod is running �[1mSTEP�[0m: deleting ReplicationController wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9 in namespace emptydir-wrapper-8896, will wait for the garbage collector to delete the pods Jan 22 23:45:06.897: INFO: Deleting ReplicationController wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9 took: 49.467848ms Jan 22 23:45:06.998: INFO: Terminating ReplicationController wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9 pods took: 101.194022ms �[1mSTEP�[0m: 
Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:188 Jan 22 23:45:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-wrapper-8896" for this suite. �[32m•�[0m{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":61,"completed":11,"skipped":1149,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)�[0m �[90mwith short downscale stabilization window�[0m �[1mshould scale down soon after the stabilization period�[0m �[37mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:45:13.060: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename horizontal-pod-autoscaling �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should scale down soon after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34 �[1mSTEP�[0m: setting up resource consumer and HPA �[1mSTEP�[0m: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas �[1mSTEP�[0m: creating deployment consumer in namespace horizontal-pod-autoscaling-6202 I0122 23:45:13.377311 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-6202, replica count: 1 �[1mSTEP�[0m: Running controller I0122 23:45:23.429485 14 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-6202 I0122 23:45:23.515650 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-6202, replica count: 1 I0122 23:45:33.566377 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 23:45:38.566: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Jan 22 23:45:38.599: INFO: RC consumer: consume 110 millicores in total Jan 22 23:45:38.599: INFO: RC consumer: setting consumption to 110 millicores in total Jan 22 23:45:38.599: INFO: RC consumer: sending request to consume 110 millicores Jan 22 23:45:38.599: INFO: RC consumer: sending 
request to consume 0 MB Jan 22 23:45:38.599: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:45:38.599: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Jan 22 23:45:38.599: INFO: RC consumer: consume 0 MB in total Jan 22 23:45:38.635: INFO: RC consumer: setting consumption to 0 MB in total Jan 22 23:45:38.635: INFO: RC consumer: consume custom metric 0 in total Jan 22 23:45:38.635: INFO: RC consumer: setting bump of metric QPS to 0 in total Jan 22 23:45:38.635: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:45:38.635: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } �[1mSTEP�[0m: triggering scale up to record a recommendation Jan 22 23:45:38.671: INFO: RC consumer: consume 330 millicores in total Jan 22 23:45:38.671: INFO: RC consumer: setting consumption to 330 millicores in total Jan 22 23:45:38.703: INFO: waiting for 3 replicas (current: 1) Jan 22 23:45:58.737: INFO: waiting for 3 replicas (current: 1) Jan 22 23:46:08.635: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:46:08.635: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:46:08.671: INFO: RC consumer: sending request to consume 330 millicores Jan 22 23:46:08.671: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 22 23:46:08.671: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:46:08.672: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:46:18.738: INFO: waiting for 3 replicas (current: 1) Jan 22 23:46:38.672: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:46:38.672: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:46:38.714: INFO: RC consumer: sending request to consume 330 millicores Jan 22 23:46:38.714: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:46:38.715: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Jan 22 23:46:38.715: INFO: 
ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:46:38.738: INFO: waiting for 3 replicas (current: 1) Jan 22 23:46:58.739: INFO: waiting for 3 replicas (current: 3) �[1mSTEP�[0m: triggering scale down by lowering consumption Jan 22 23:46:58.739: INFO: RC consumer: consume 220 millicores in total Jan 22 23:46:58.739: INFO: RC consumer: setting consumption to 220 millicores in total Jan 22 23:46:58.772: INFO: waiting for 2 replicas (current: 3) Jan 22 23:47:08.708: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:47:08.708: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:47:08.756: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:47:08.756: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:47:08.756: INFO: RC consumer: sending request to consume 220 millicores Jan 22 23:47:08.757: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Jan 22 23:47:18.809: INFO: waiting for 2 replicas (current: 3) Jan 22 23:47:38.744: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:47:38.744: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:47:38.792: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:47:38.792: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:47:38.807: INFO: waiting for 2 replicas (current: 3) Jan 22 23:47:38.811: INFO: RC consumer: sending request to consume 220 millicores Jan 22 23:47:38.811: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Jan 22 23:47:58.808: INFO: waiting for 2 replicas (current: 3) Jan 22 23:48:08.780: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:48:08.780: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:48:08.827: INFO: RC consumer: sending request to consume 0 of custom metric QPS Jan 22 23:48:08.827: INFO: ConsumeCustomMetric URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:48:08.851: INFO: RC consumer: sending request to consume 220 millicores Jan 22 23:48:08.852: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=220&requestSizeMillicores=100 } Jan 22 23:48:18.808: INFO: waiting for 2 replicas (current: 3) Jan 22 23:48:38.806: INFO: waiting for 2 replicas (current: 2) �[1mSTEP�[0m: verifying time waited for a scale down Jan 22 23:48:38.806: INFO: time waited for scale down: 1m40.067236552s Jan 22 23:48:38.816: INFO: RC consumer: sending request to consume 0 MB Jan 22 23:48:38.816: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6202/services/consumer-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } �[1mSTEP�[0m: Removing consuming RC consumer Jan 22 23:48:38.842: INFO: RC consumer: stopping metric consumer Jan 22 23:48:38.842: INFO: RC consumer: stopping CPU consumer Jan 22 23:48:38.851: INFO: RC consumer: stopping mem consumer �[1mSTEP�[0m: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-6202, will wait for the garbage collector to delete the pods Jan 22 23:48:48.973: INFO: Deleting Deployment.apps consumer took: 36.932111ms Jan 22 23:48:49.074: INFO: Terminating Deployment.apps consumer pods took: 100.390861ms �[1mSTEP�[0m: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-6202, will wait for the garbage collector to delete the pods Jan 22 23:48:51.165: INFO: Deleting ReplicationController consumer-ctrl took: 36.883037ms Jan 22 23:48:51.266: INFO: Terminating ReplicationController consumer-ctrl pods took: 101.070101ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/framework.go:188 Jan 22 23:48:53.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-6202" for this suite. 
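The spec above drops the resource consumer from 330 to 220 millicores and then measures how quickly the HorizontalPodAutoscaler goes from 3 to 2 replicas under a shortened scale-down stabilization window (1m40.067s in this run). A minimal sketch of an HPA with that knob set, using the autoscaling/v2 types, is below; the 60-second window, target name, namespace, and utilization target are illustrative assumptions, not the values the e2e framework uses.

// hpa_behavior.go: create an HPA whose scale-down stabilization window is shortened,
// the behavior field exercised by the spec above. Names and numbers are illustrative.
package main

import (
	"context"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(v int32) *int32 { return &v }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer", Namespace: "default"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "consumer",
			},
			MinReplicas: int32Ptr(1),
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: "cpu",
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: int32Ptr(50),
					},
				},
			}},
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleDown: &autoscalingv2.HPAScalingRules{
					// Short window: a scale-down recommendation only has to hold for 60s
					// before replicas are removed, which is what lets the spec above
					// observe a downscale soon after lowering consumption.
					StabilizationWindowSeconds: int32Ptr(60),
				},
			},
		},
	}
	if _, err := cs.AutoscalingV2().HorizontalPodAutoscalers("default").
		Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}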
• [SLOW TEST:220.045 seconds] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23 with short downscale stabilization window test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:33 should scale down soon after the stabilization period test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period","total":61,"completed":12,"skipped":1224,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 22 23:48:53.107: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster.
Jan 22 23:48:53.561: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:53.594: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:48:53.594: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:54.631: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:54.666: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:48:54.666: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:55.630: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:55.664: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:48:55.664: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:56.631: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:56.664: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:48:56.664: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:57.633: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:57.669: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:48:57.669: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:58.630: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:58.664: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 22 23:48:58.665: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP�[0m: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 22 23:48:58.813: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:58.846: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:48:58.846: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:48:59.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:48:59.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:48:59.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:00.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:00.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:49:00.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:01.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:01.917: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:49:01.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:02.885: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:02.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:49:02.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:03.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:03.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 22 23:49:03.918: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP�[0m: Wait for the failed daemon pod to be completely deleted. 
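Throughout the DaemonSet specs in this run the framework logs that daemon pods "can't tolerate" the control-plane node's node-role.kubernetes.io/master and node-role.kubernetes.io/control-plane NoSchedule taints, so only the Windows worker capz-conf-2xrmj is counted. If a DaemonSet were meant to run on the tainted node as well, its pod template would need matching tolerations; a small sketch with hypothetical names and a placeholder image is below.

// ds_tolerations.go: a DaemonSet whose pods tolerate the control-plane NoSchedule
// taints reported in the log above. Name, namespace, and image are placeholders.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "example-daemon"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example-daemon", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Without these tolerations the scheduler skips tainted control-plane
					// nodes, which is exactly what the e2e node check accounts for above.
					Tolerations: []corev1.Toleration{
						{Key: "node-role.kubernetes.io/master", Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
						{Key: "node-role.kubernetes.io/control-plane", Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
					},
					Containers: []corev1.Container{{
						Name:  "agent",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2", // image name borrowed from the log; any daemon image works
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("default").
		Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}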
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1609, will wait for the garbage collector to delete the pods Jan 22 23:49:04.103: INFO: Deleting DaemonSet.extensions daemon-set took: 36.334354ms Jan 22 23:49:04.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.652131ms Jan 22 23:49:09.237: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:49:09.237: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 22 23:49:09.270: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"9130"},"items":null} Jan 22 23:49:09.303: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9131"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:49:09.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1609" for this suite. •{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":61,"completed":13,"skipped":1240,"failed":0}
------------------------------
[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68 [BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 22 23:49:09.482: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename density-test-windows STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] latency/resource should be within limit when create 10 pods with 0s interval test/e2e/windows/density.go:68 STEP: Creating a batch of pods STEP: Waiting for all Pods to be observed by the watch...
Jan 22 23:49:19.755: INFO: Waiting for pod test-92b5f63a-62f6-4af3-ac25-fd25d1150b5a to disappear Jan 22 23:49:19.761: INFO: Waiting for pod test-b3642056-aaff-48a3-af77-8d595572a39f to disappear Jan 22 23:49:19.761: INFO: Waiting for pod test-acc54ef1-af8b-411a-a71d-2241e2483a05 to disappear Jan 22 23:49:19.761: INFO: Waiting for pod test-485e7a86-bf77-4f20-8ffd-d397648ddcfa to disappear Jan 22 23:49:19.762: INFO: Waiting for pod test-683aebeb-6eec-4e5e-a3ef-c7f3bd9bb02e to disappear Jan 22 23:49:19.788: INFO: Waiting for pod test-b8a9eb20-01fc-4e8f-83f3-e5f5e27eeb3a to disappear Jan 22 23:49:19.789: INFO: Waiting for pod test-8ec541b9-1d93-4445-9855-8e1b899b0199 to disappear Jan 22 23:49:19.793: INFO: Waiting for pod test-44d18de4-235b-4eff-96bb-f42561a8aa98 to disappear Jan 22 23:49:19.793: INFO: Pod test-92b5f63a-62f6-4af3-ac25-fd25d1150b5a still exists Jan 22 23:49:19.793: INFO: Waiting for pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 to disappear Jan 22 23:49:19.794: INFO: Waiting for pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 to disappear Jan 22 23:49:19.800: INFO: Pod test-683aebeb-6eec-4e5e-a3ef-c7f3bd9bb02e still exists Jan 22 23:49:19.803: INFO: Pod test-acc54ef1-af8b-411a-a71d-2241e2483a05 still exists Jan 22 23:49:19.807: INFO: Pod test-485e7a86-bf77-4f20-8ffd-d397648ddcfa still exists Jan 22 23:49:19.810: INFO: Pod test-b3642056-aaff-48a3-af77-8d595572a39f still exists Jan 22 23:49:19.823: INFO: Pod test-b8a9eb20-01fc-4e8f-83f3-e5f5e27eeb3a still exists Jan 22 23:49:19.827: INFO: Pod test-8ec541b9-1d93-4445-9855-8e1b899b0199 still exists Jan 22 23:49:19.830: INFO: Pod test-44d18de4-235b-4eff-96bb-f42561a8aa98 still exists Jan 22 23:49:19.834: INFO: Pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 still exists Jan 22 23:49:19.837: INFO: Pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 still exists Jan 22 23:49:49.796: INFO: Waiting for pod test-92b5f63a-62f6-4af3-ac25-fd25d1150b5a to disappear Jan 22 23:49:49.800: INFO: Waiting for pod test-683aebeb-6eec-4e5e-a3ef-c7f3bd9bb02e to disappear Jan 22 23:49:49.803: INFO: Waiting for pod test-acc54ef1-af8b-411a-a71d-2241e2483a05 to disappear Jan 22 23:49:49.808: INFO: Waiting for pod test-485e7a86-bf77-4f20-8ffd-d397648ddcfa to disappear Jan 22 23:49:49.811: INFO: Waiting for pod test-b3642056-aaff-48a3-af77-8d595572a39f to disappear Jan 22 23:49:49.824: INFO: Waiting for pod test-b8a9eb20-01fc-4e8f-83f3-e5f5e27eeb3a to disappear Jan 22 23:49:49.828: INFO: Waiting for pod test-8ec541b9-1d93-4445-9855-8e1b899b0199 to disappear Jan 22 23:49:49.829: INFO: Pod test-92b5f63a-62f6-4af3-ac25-fd25d1150b5a no longer exists Jan 22 23:49:49.831: INFO: Waiting for pod test-44d18de4-235b-4eff-96bb-f42561a8aa98 to disappear Jan 22 23:49:49.833: INFO: Pod test-683aebeb-6eec-4e5e-a3ef-c7f3bd9bb02e no longer exists Jan 22 23:49:49.835: INFO: Waiting for pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 to disappear Jan 22 23:49:49.836: INFO: Pod test-acc54ef1-af8b-411a-a71d-2241e2483a05 no longer exists Jan 22 23:49:49.838: INFO: Waiting for pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 to disappear Jan 22 23:49:49.840: INFO: Pod test-485e7a86-bf77-4f20-8ffd-d397648ddcfa no longer exists Jan 22 23:49:49.844: INFO: Pod test-b3642056-aaff-48a3-af77-8d595572a39f no longer exists Jan 22 23:49:49.856: INFO: Pod test-b8a9eb20-01fc-4e8f-83f3-e5f5e27eeb3a no longer exists Jan 22 23:49:49.862: INFO: Pod test-8ec541b9-1d93-4445-9855-8e1b899b0199 no longer exists Jan 22 23:49:49.863: INFO: Pod test-44d18de4-235b-4eff-96bb-f42561a8aa98 no longer exists Jan 22 
23:49:49.868: INFO: Pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 no longer exists Jan 22 23:49:49.870: INFO: Pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 no longer exists [AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow] test/e2e/framework/framework.go:188 Jan 22 23:49:49.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "density-test-windows-4549" for this suite. �[32m•�[0m{"msg":"PASSED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","total":61,"completed":14,"skipped":1314,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] Daemon set [Serial]�[0m �[1mshould rollback without unnecessary restarts [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:49:49.956: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename daemonsets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652 Jan 22 23:49:50.359: INFO: Create a RollingUpdate DaemonSet Jan 22 23:49:50.398: INFO: Check that daemon pods launch on every node of the cluster Jan 22 23:49:50.438: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:50.471: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:49:50.471: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:51.506: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:51.539: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:49:51.539: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:52.507: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:52.541: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:49:52.541: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:53.507: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:53.540: INFO: Number of nodes with available pods 
controlled by daemonset daemon-set: 0 Jan 22 23:49:53.540: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:54.506: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:54.541: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:49:54.541: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:49:55.508: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:49:55.542: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 22 23:49:55.542: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set Jan 22 23:49:55.542: INFO: Update the DaemonSet to trigger a rollout Jan 22 23:49:55.622: INFO: Updating DaemonSet daemon-set Jan 22 23:50:01.768: INFO: Roll back the DaemonSet before rollout is complete Jan 22 23:50:01.840: INFO: Updating DaemonSet daemon-set Jan 22 23:50:01.840: INFO: Make sure DaemonSet rollback is complete Jan 22 23:50:01.908: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:50:02.977: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:50:03.977: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:50:04.976: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:50:05.942: INFO: Pod daemon-set-28pcc is not available Jan 22 23:50:05.980: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 �[1mSTEP�[0m: Deleting DaemonSet "daemon-set" �[1mSTEP�[0m: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5489, will wait for the garbage collector to delete the pods Jan 22 23:50:06.167: INFO: Deleting DaemonSet.extensions daemon-set took: 36.467765ms Jan 22 23:50:06.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.460852ms Jan 22 23:50:15.601: INFO: Number of nodes with available pods controlled by daemonset 
daemon-set: 0 Jan 22 23:50:15.601: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 22 23:50:15.634: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"9712"},"items":null} Jan 22 23:50:15.666: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9712"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:50:15.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5489" for this suite. •{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":61,"completed":15,"skipped":1328,"failed":0}
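The rollback spec that just passed updates a RollingUpdate DaemonSet mid-rollout and then restores the previous pod template, expecting the original pods to keep running without unnecessary restarts. A rough sketch of that update-then-revert flow with client-go follows; the namespace is a stand-in, the image names are borrowed from elsewhere in this log, and the real test goes through the e2e framework's own update helpers.

// ds_rollback.go: bump a DaemonSet's image to start a rollout, then restore the previous
// image to "roll back", loosely following the spec above. Namespace/images are stand-ins.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// setImage updates the first container's image, retrying on update conflicts.
func setImage(cs *kubernetes.Clientset, ns, name, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const ns, name = "default", "daemon-set"
	// Trigger a rollout with a new image, then revert to the original before it finishes.
	if err := setImage(cs, ns, name, "k8s.gcr.io/e2e-test-images/agnhost:2.39"); err != nil {
		panic(err)
	}
	if err := setImage(cs, ns, name, "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2"); err != nil {
		panic(err)
	}
}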
------------------------------
[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 22 23:50:15.845: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/framework/framework.go:652 STEP: creating the pod with failed condition STEP: updating the pod Jan 22 23:52:16.806: INFO: Successfully updated pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" STEP: waiting for pod running STEP: deleting the pod gracefully Jan 22 23:52:28.874: INFO: Deleting pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" in namespace "var-expansion-7880" Jan 22 23:52:28.916: INFO: Wait up to 5m0s for pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 22 23:52:34.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7880" for this suite. • [SLOW TEST:139.215 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":61,"completed":16,"skipped":1618,"failed":0}
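The Variable Expansion spec above creates a pod whose volume mount uses a subPathExpr that cannot be expanded, then edits the pod so the expansion resolves and the container can start. For reference, a minimal pod spec in which the expansion does resolve is sketched below, using a downward-API environment variable; the pod name, volume, namespace, and mount path are made up for illustration, and the image name is reused from this log.

// subpathexpr_pod.go: a pod whose emptyDir mount uses subPathExpr expanded from an env
// var supplied by the downward API, the mechanism exercised by the spec above.
// All names, the namespace, and the mount path are illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/data",
					// $(POD_NAME) is expanded by the kubelet; if the referenced variable
					// did not exist, the mount would fail the way the spec above provokes.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}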
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 22 23:52:35.072: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should update pod when spec was updated and
update strategy is RollingUpdate [Conformance] test/e2e/framework/framework.go:652 Jan 22 23:52:35.430: INFO: Creating simple daemon set daemon-set �[1mSTEP�[0m: Check that daemon pods launch on every node of the cluster. Jan 22 23:52:35.523: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:35.557: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:52:35.557: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:36.592: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:36.626: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:52:36.626: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:37.594: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:37.627: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:52:37.627: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:38.593: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:38.626: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:52:38.626: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:39.592: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:39.625: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:52:39.625: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:40.592: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:40.625: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 22 23:52:40.625: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP�[0m: Update daemon pods image. �[1mSTEP�[0m: Check that daemon pods images are updated. Jan 22 23:52:40.862: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
Jan 22 23:52:40.896: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:41.930: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:41.964: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:42.930: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:42.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:43.930: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:43.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:44.931: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:44.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:45.933: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:45.934: INFO: Pod daemon-set-cp7pj is not available Jan 22 23:52:45.967: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:46.931: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:46.931: INFO: Pod daemon-set-cp7pj is not available Jan 22 23:52:46.966: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:47.936: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
Jan 22 23:52:47.936: INFO: Pod daemon-set-cp7pj is not available Jan 22 23:52:47.970: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:48.930: INFO: Wrong image for pod: daemon-set-7s2vz. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Jan 22 23:52:48.930: INFO: Pod daemon-set-cp7pj is not available Jan 22 23:52:48.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:49.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:50.965: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:51.968: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:52.971: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:53.931: INFO: Pod daemon-set-sr5qz is not available Jan 22 23:52:53.967: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node �[1mSTEP�[0m: Check that daemon pods are still running on every node of the cluster. 
Jan 22 23:52:54.001: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:54.034: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:52:54.034: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:55.070: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:55.102: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:52:55.102: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:56.076: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:56.110: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:52:56.110: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:57.070: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:57.103: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 22 23:52:57.103: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:52:58.070: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:52:58.103: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 22 23:52:58.103: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 �[1mSTEP�[0m: Deleting DaemonSet "daemon-set" �[1mSTEP�[0m: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1413, will wait for the garbage collector to delete the pods Jan 22 23:52:58.391: INFO: Deleting DaemonSet.extensions daemon-set took: 36.705411ms Jan 22 23:52:58.491: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.173889ms Jan 22 23:53:02.226: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:53:02.226: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Jan 22 23:53:02.259: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"10383"},"items":null} Jan 22 23:53:02.292: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10383"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:53:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying 
namespace "daemonsets-1413" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":61,"completed":17,"skipped":1898,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Variable Expansion�[0m �[1mshould fail substituting values in a volume subpath with absolute path [Slow] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:53:02.467: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/framework/framework.go:652 Jan 22 23:53:06.805: INFO: Deleting pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" in namespace "var-expansion-355" Jan 22 23:53:06.842: INFO: Wait up to 5m0s for pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 22 23:53:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-355" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":61,"completed":18,"skipped":2023,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)�[0m �[90m[Serial] [Slow] Deployment�[0m �[1mShould scale from 1 pod to 3 pods and from 3 to 5�[0m �[37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:40�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 22 23:53:10.982: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename horizontal-pod-autoscaling �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:40 �[1mSTEP�[0m: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[1mSTEP�[0m: creating deployment test-deployment in namespace horizontal-pod-autoscaling-1629 I0122 23:53:11.306480 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-1629, replica count: 1 I0122 23:53:21.358414 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: Running controller �[1mSTEP�[0m: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-1629 I0122 23:53:21.450417 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-1629, replica count: 1 I0122 23:53:31.501660 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 22 23:53:36.502: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Jan 22 23:53:36.536: INFO: RC test-deployment: consume 250 millicores in total Jan 22 23:53:36.536: INFO: RC test-deployment: setting consumption to 250 millicores in total Jan 22 23:53:36.536: INFO: RC test-deployment: sending request to consume 250 millicores Jan 22 23:53:36.536: INFO: RC test-deployment: sending request to consume 0 MB Jan 22 23:53:36.537: INFO: ConsumeMem URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:53:36.536: INFO: RC test-deployment: consume 0 MB in total Jan 22 23:53:36.536: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 22 23:53:36.606: INFO: RC test-deployment: setting consumption to 0 MB in total Jan 22 23:53:36.606: INFO: RC test-deployment: consume custom metric 0 in total Jan 22 23:53:36.606: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 22 23:53:36.606: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:53:36.641: INFO: RC test-deployment: setting bump of metric QPS to 0 in total Jan 22 23:53:36.712: INFO: waiting for 3 replicas (current: 1) Jan 22 23:53:56.747: INFO: waiting for 3 replicas (current: 1) Jan 22 23:54:06.606: INFO: RC test-deployment: sending request to consume 0 MB Jan 22 23:54:06.606: INFO: RC test-deployment: sending request to consume 250 millicores Jan 22 23:54:06.606: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:54:06.606: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 22 23:54:06.641: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 22 23:54:06.641: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:54:16.746: INFO: waiting for 3 replicas (current: 3) Jan 22 23:54:16.746: INFO: RC test-deployment: consume 700 millicores in total Jan 22 23:54:16.746: INFO: RC test-deployment: setting consumption to 700 millicores in total Jan 22 23:54:16.778: INFO: waiting for 5 replicas (current: 3) Jan 22 23:54:36.641: INFO: RC test-deployment: sending request to consume 0 MB Jan 22 23:54:36.642: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:54:36.649: INFO: RC test-deployment: sending request to consume 700 millicores Jan 22 23:54:36.649: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 22 23:54:36.676: INFO: RC test-deployment: sending request to 
consume 0 of custom metric QPS Jan 22 23:54:36.676: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:54:36.812: INFO: waiting for 5 replicas (current: 3) Jan 22 23:54:56.812: INFO: waiting for 5 replicas (current: 4) Jan 22 23:55:06.676: INFO: RC test-deployment: sending request to consume 0 MB Jan 22 23:55:06.676: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 22 23:55:09.696: INFO: RC test-deployment: sending request to consume 700 millicores Jan 22 23:55:09.696: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 22 23:55:09.696: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 22 23:55:09.696: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1629/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 22 23:55:16.813: INFO: waiting for 5 replicas (current: 5) �[1mSTEP�[0m: Removing consuming RC test-deployment Jan 22 23:55:16.849: INFO: RC test-deployment: stopping metric consumer Jan 22 23:55:16.849: INFO: RC test-deployment: stopping CPU consumer Jan 22 23:55:16.850: INFO: RC test-deployment: stopping mem consumer �[1mSTEP�[0m: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-1629, will wait for the garbage collector to delete the pods Jan 22 23:55:26.974: INFO: Deleting Deployment.apps test-deployment took: 37.799466ms Jan 22 23:55:27.074: INFO: Terminating Deployment.apps test-deployment pods took: 100.864925ms �[1mSTEP�[0m: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-1629, will wait for the garbage collector to delete the pods Jan 22 23:55:29.474: INFO: Deleting ReplicationController test-deployment-ctrl took: 37.685506ms Jan 22 23:55:29.575: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 101.142188ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 22 23:55:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-1629" for this suite. 
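The HPA spec above starts a single-replica Deployment (test-deployment), drives 250 and then 700 millicores of CPU through the ConsumeCPU proxy endpoint, and waits for the autoscaler to go from 1 to 3 and then to 5 replicas. A minimal Go sketch of the kind of autoscaling/v1 object involved, built from the stock client types rather than the e2e helpers (the target utilization is an assumed value; the log does not show it), could look like this:

// Illustrative sketch of an autoscaling/v1 HPA resembling the one exercised above.
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func exampleHPA() *autoscalingv1.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20) // assumed target utilization, for illustration only
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			// Scale the Deployment the consumer pods above belong to.
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "test-deployment",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
}

func main() {
	fmt.Println(exampleHPA().Spec.MaxReplicas)
}

Even if the per-pod request and target were such that 700 millicores of demand justified more than five pods, the autoscaler stops at MaxReplicas, which is what lets the spec wait for exactly 5 replicas.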
• [SLOW TEST:140.717 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
[Serial] [Slow] Deployment
test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
Should scale from 1 pod to 3 pods and from 3 to 5
test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":19,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 22 23:55:31.702: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance] test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 22 23:56:12.298: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true)
Jan 22 23:56:12.648: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For
namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 22 23:56:12.649: INFO: Deleting pod "simpletest.rc-26d84" in namespace "gc-7044" Jan 22 23:56:12.694: INFO: Deleting pod "simpletest.rc-2kdxn" in namespace "gc-7044" Jan 22 23:56:12.742: INFO: Deleting pod "simpletest.rc-2mg7n" in namespace "gc-7044" Jan 22 23:56:12.784: INFO: Deleting pod "simpletest.rc-2sgsd" in namespace "gc-7044" Jan 22 23:56:12.824: INFO: Deleting pod "simpletest.rc-2tc5b" in namespace "gc-7044" Jan 22 23:56:12.870: INFO: Deleting pod "simpletest.rc-2thgt" in namespace "gc-7044" Jan 22 23:56:12.911: INFO: Deleting pod "simpletest.rc-2wp5m" in namespace "gc-7044" Jan 22 23:56:12.956: INFO: Deleting pod "simpletest.rc-44dmt" in namespace "gc-7044" Jan 22 23:56:13.001: INFO: Deleting pod "simpletest.rc-49967" in namespace "gc-7044" Jan 22 23:56:13.044: INFO: Deleting pod "simpletest.rc-4fk9g" in namespace "gc-7044" Jan 22 23:56:13.092: INFO: Deleting pod "simpletest.rc-4frvd" in namespace "gc-7044" Jan 22 23:56:13.140: INFO: Deleting pod "simpletest.rc-4qplk" in namespace "gc-7044" Jan 22 23:56:13.180: INFO: Deleting pod "simpletest.rc-4qx7n" in namespace "gc-7044" Jan 22 23:56:13.225: INFO: Deleting pod "simpletest.rc-4r8jx" in namespace "gc-7044" Jan 22 23:56:13.267: INFO: Deleting pod "simpletest.rc-5jjks" in namespace "gc-7044" Jan 22 23:56:13.317: INFO: Deleting pod "simpletest.rc-5kpk8" in namespace "gc-7044" Jan 22 23:56:13.364: INFO: Deleting pod "simpletest.rc-5lv4j" in namespace "gc-7044" Jan 22 23:56:13.411: INFO: Deleting pod "simpletest.rc-5m2dw" in namespace "gc-7044" Jan 22 23:56:13.455: INFO: Deleting pod "simpletest.rc-5qwp9" in namespace "gc-7044" Jan 22 23:56:13.501: INFO: Deleting pod "simpletest.rc-65xfg" in namespace "gc-7044" Jan 22 23:56:13.545: INFO: Deleting pod "simpletest.rc-6965w" in namespace "gc-7044" Jan 22 23:56:13.587: INFO: Deleting pod "simpletest.rc-69llx" in namespace "gc-7044" Jan 22 23:56:13.628: INFO: Deleting pod "simpletest.rc-6fghd" in namespace "gc-7044" Jan 22 23:56:13.672: INFO: Deleting pod "simpletest.rc-6v8vh" in namespace "gc-7044" Jan 22 23:56:13.715: INFO: Deleting pod "simpletest.rc-6w7ck" in namespace "gc-7044" Jan 22 23:56:13.758: INFO: Deleting pod "simpletest.rc-7v8q2" in namespace "gc-7044" Jan 22 23:56:13.803: INFO: Deleting pod "simpletest.rc-7vlfn" in namespace "gc-7044" Jan 22 23:56:13.845: INFO: Deleting pod "simpletest.rc-85v48" in namespace "gc-7044" Jan 22 23:56:13.886: INFO: Deleting pod "simpletest.rc-8knr4" in namespace "gc-7044" Jan 22 23:56:13.928: INFO: Deleting pod "simpletest.rc-8xmg6" in namespace "gc-7044" Jan 22 23:56:13.982: INFO: Deleting pod "simpletest.rc-954tl" in namespace "gc-7044" Jan 22 23:56:14.028: INFO: Deleting pod "simpletest.rc-98b4z" in namespace "gc-7044" Jan 22 23:56:14.071: INFO: Deleting pod "simpletest.rc-9fggr" in namespace "gc-7044" Jan 22 23:56:14.116: INFO: Deleting pod "simpletest.rc-9m4ct" in namespace "gc-7044" Jan 22 23:56:14.160: INFO: Deleting pod "simpletest.rc-9w92z" in namespace "gc-7044" Jan 22 23:56:14.210: INFO: Deleting pod "simpletest.rc-b7kjp" in namespace "gc-7044" Jan 22 23:56:14.252: INFO: Deleting pod "simpletest.rc-bkdjg" in namespace "gc-7044" Jan 22 23:56:14.294: INFO: Deleting pod "simpletest.rc-bq2hk" in namespace "gc-7044" Jan 22 23:56:14.337: INFO: Deleting pod "simpletest.rc-cpqf8" in namespace "gc-7044" Jan 22 23:56:14.378: INFO: Deleting pod "simpletest.rc-d795s" in namespace "gc-7044" Jan 22 23:56:14.427: INFO: Deleting pod 
"simpletest.rc-d84fg" in namespace "gc-7044" Jan 22 23:56:14.475: INFO: Deleting pod "simpletest.rc-db26z" in namespace "gc-7044" Jan 22 23:56:14.521: INFO: Deleting pod "simpletest.rc-db7nx" in namespace "gc-7044" Jan 22 23:56:14.569: INFO: Deleting pod "simpletest.rc-dvrdk" in namespace "gc-7044" Jan 22 23:56:14.610: INFO: Deleting pod "simpletest.rc-dvw2v" in namespace "gc-7044" Jan 22 23:56:14.654: INFO: Deleting pod "simpletest.rc-dzbhn" in namespace "gc-7044" Jan 22 23:56:14.698: INFO: Deleting pod "simpletest.rc-f6rvt" in namespace "gc-7044" Jan 22 23:56:14.743: INFO: Deleting pod "simpletest.rc-f7m7s" in namespace "gc-7044" Jan 22 23:56:14.789: INFO: Deleting pod "simpletest.rc-fbk84" in namespace "gc-7044" Jan 22 23:56:14.834: INFO: Deleting pod "simpletest.rc-fdlsw" in namespace "gc-7044" Jan 22 23:56:14.880: INFO: Deleting pod "simpletest.rc-fkkj5" in namespace "gc-7044" Jan 22 23:56:14.925: INFO: Deleting pod "simpletest.rc-flmwq" in namespace "gc-7044" Jan 22 23:56:14.970: INFO: Deleting pod "simpletest.rc-fqfvx" in namespace "gc-7044" Jan 22 23:56:15.015: INFO: Deleting pod "simpletest.rc-g55x5" in namespace "gc-7044" Jan 22 23:56:15.060: INFO: Deleting pod "simpletest.rc-gjp65" in namespace "gc-7044" Jan 22 23:56:15.106: INFO: Deleting pod "simpletest.rc-gjqw8" in namespace "gc-7044" Jan 22 23:56:15.151: INFO: Deleting pod "simpletest.rc-gx579" in namespace "gc-7044" Jan 22 23:56:15.201: INFO: Deleting pod "simpletest.rc-h297h" in namespace "gc-7044" Jan 22 23:56:15.247: INFO: Deleting pod "simpletest.rc-h8298" in namespace "gc-7044" Jan 22 23:56:15.291: INFO: Deleting pod "simpletest.rc-h9zqc" in namespace "gc-7044" Jan 22 23:56:15.339: INFO: Deleting pod "simpletest.rc-hxdtg" in namespace "gc-7044" Jan 22 23:56:15.383: INFO: Deleting pod "simpletest.rc-j6dqh" in namespace "gc-7044" Jan 22 23:56:15.427: INFO: Deleting pod "simpletest.rc-j6ftb" in namespace "gc-7044" Jan 22 23:56:15.471: INFO: Deleting pod "simpletest.rc-jsv8z" in namespace "gc-7044" Jan 22 23:56:15.519: INFO: Deleting pod "simpletest.rc-jvtts" in namespace "gc-7044" Jan 22 23:56:15.562: INFO: Deleting pod "simpletest.rc-kd8sd" in namespace "gc-7044" Jan 22 23:56:15.607: INFO: Deleting pod "simpletest.rc-lqlts" in namespace "gc-7044" Jan 22 23:56:15.652: INFO: Deleting pod "simpletest.rc-lw644" in namespace "gc-7044" Jan 22 23:56:15.696: INFO: Deleting pod "simpletest.rc-lwzbg" in namespace "gc-7044" Jan 22 23:56:15.739: INFO: Deleting pod "simpletest.rc-m28vq" in namespace "gc-7044" Jan 22 23:56:15.785: INFO: Deleting pod "simpletest.rc-m7xvk" in namespace "gc-7044" Jan 22 23:56:15.835: INFO: Deleting pod "simpletest.rc-m9frd" in namespace "gc-7044" Jan 22 23:56:15.880: INFO: Deleting pod "simpletest.rc-mcdsf" in namespace "gc-7044" Jan 22 23:56:15.928: INFO: Deleting pod "simpletest.rc-mrbfz" in namespace "gc-7044" Jan 22 23:56:15.974: INFO: Deleting pod "simpletest.rc-n4cxq" in namespace "gc-7044" Jan 22 23:56:16.020: INFO: Deleting pod "simpletest.rc-nrspz" in namespace "gc-7044" Jan 22 23:56:16.063: INFO: Deleting pod "simpletest.rc-ppxbv" in namespace "gc-7044" Jan 22 23:56:16.108: INFO: Deleting pod "simpletest.rc-pzzst" in namespace "gc-7044" Jan 22 23:56:16.157: INFO: Deleting pod "simpletest.rc-q2k4k" in namespace "gc-7044" Jan 22 23:56:16.200: INFO: Deleting pod "simpletest.rc-q5vb6" in namespace "gc-7044" Jan 22 23:56:16.243: INFO: Deleting pod "simpletest.rc-qt6fh" in namespace "gc-7044" Jan 22 23:56:16.288: INFO: Deleting pod "simpletest.rc-qv2cn" in namespace "gc-7044" Jan 22 23:56:16.329: 
INFO: Deleting pod "simpletest.rc-r2wnx" in namespace "gc-7044" Jan 22 23:56:16.371: INFO: Deleting pod "simpletest.rc-rhcch" in namespace "gc-7044" Jan 22 23:56:16.413: INFO: Deleting pod "simpletest.rc-s4q62" in namespace "gc-7044" Jan 22 23:56:16.455: INFO: Deleting pod "simpletest.rc-s7vj8" in namespace "gc-7044" Jan 22 23:56:16.499: INFO: Deleting pod "simpletest.rc-s8wfb" in namespace "gc-7044" Jan 22 23:56:16.543: INFO: Deleting pod "simpletest.rc-sc7vp" in namespace "gc-7044" Jan 22 23:56:16.587: INFO: Deleting pod "simpletest.rc-sg74f" in namespace "gc-7044" Jan 22 23:56:16.634: INFO: Deleting pod "simpletest.rc-sgjrf" in namespace "gc-7044" Jan 22 23:56:16.682: INFO: Deleting pod "simpletest.rc-slcng" in namespace "gc-7044" Jan 22 23:56:16.725: INFO: Deleting pod "simpletest.rc-ss627" in namespace "gc-7044" Jan 22 23:56:16.768: INFO: Deleting pod "simpletest.rc-st6c8" in namespace "gc-7044" Jan 22 23:56:16.812: INFO: Deleting pod "simpletest.rc-tbxn4" in namespace "gc-7044" Jan 22 23:56:16.855: INFO: Deleting pod "simpletest.rc-tz8fv" in namespace "gc-7044" Jan 22 23:56:16.905: INFO: Deleting pod "simpletest.rc-v7d8p" in namespace "gc-7044" Jan 22 23:56:16.967: INFO: Deleting pod "simpletest.rc-w76x9" in namespace "gc-7044" Jan 22 23:56:17.009: INFO: Deleting pod "simpletest.rc-wbqbf" in namespace "gc-7044" Jan 22 23:56:17.051: INFO: Deleting pod "simpletest.rc-xh4hj" in namespace "gc-7044" Jan 22 23:56:17.100: INFO: Deleting pod "simpletest.rc-z9dw6" in namespace "gc-7044"
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188
Jan 22 23:56:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7044" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":61,"completed":20,"skipped":2218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 22 23:56:17.214: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should run and stop simple daemon [Conformance] test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet
"daemon-set" �[1mSTEP�[0m: Check that daemon pods launch on every node of the cluster. Jan 22 23:56:17.671: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:17.705: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:17.705: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:18.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:18.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:18.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:19.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:19.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:19.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:20.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:20.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:20.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:21.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:21.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:21.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:22.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:22.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:22.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:23.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:23.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:23.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:24.740: INFO: DaemonSet pods can't tolerate node 
capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:24.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:24.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:25.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:25.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:25.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:26.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:26.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:26.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:27.747: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:27.780: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:27.780: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:28.739: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:28.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:28.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:29.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:29.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:29.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:30.744: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:30.779: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:30.779: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:31.739: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} 
{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:31.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:31.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:32.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:32.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:32.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:33.739: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:33.772: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:33.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:34.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:34.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:34.773: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:35.741: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:35.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:35.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:36.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:36.775: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:36.775: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:37.740: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:37.774: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 22 23:56:37.774: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 22 23:56:38.739: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 22 23:56:38.772: INFO: Number of 
nodes with available pods controlled by daemonset daemon-set: 0
Jan 22 23:56:38.772: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
[... the same pair of messages, "DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node" followed by "Number of nodes with available pods controlled by daemonset daemon-set: 0 / Node capz-conf-2xrmj is running 0 daemon pod, expected 1", repeated roughly once per second until 23:57:21 ...]
Jan 22 23:57:21.783: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:57:21.783: INFO: Node capz-conf-96jhk is running 0 daemon pod, expected 1
[... same messages with 1 node available repeated once per second until 23:57:24 ...]
Jan 22 23:57:24.775: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 22 23:57:24.775: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 22 23:57:24.948: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:57:24.948: INFO: Node capz-conf-96jhk is running 0 daemon pod, expected 1
[... the same skip-control-plane / "available pods: 1" messages repeated roughly once per second until 23:57:46 ...]
Jan 22 23:57:46.017: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 22 23:57:46.017: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1851, will wait for the garbage collector to delete the pods
Jan 22 23:57:46.175: INFO: Deleting DaemonSet.extensions daemon-set took: 41.395929ms
Jan 22 23:57:46.275: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.284069ms
Jan 22 23:57:51.511: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 22 23:57:51.511: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 22 23:57:51.550: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"13403"},"items":null}
Jan 22 23:57:51.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13403"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:57:51.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1851" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":61,"completed":21,"skipped":2301,"failed":0}
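The repeated "can't tolerate ... skip checking this node" lines above are expected: the test's daemon pods only carry the default node.kubernetes.io/* tolerations (visible in the pod dump later in this log), so the framework excludes the tainted control-plane node from the expected node count. As an illustration only, and not the conformance DaemonSet's own spec, the sketch below (using the core/v1 Go types the e2e suite is built on) shows the tolerations a pod would need in order to also land on that node.

// Illustrative only: tolerations matching the two taints the framework
// reports on capz-conf-zs64h3-control-plane-dlccj. The conformance
// "daemon-set" pods intentionally do not carry these, so the node is skipped.
package example

import corev1 "k8s.io/api/core/v1"

func controlPlaneTolerations() []corev1.Toleration {
	return []corev1.Toleration{
		{
			Key:      "node-role.kubernetes.io/master",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		},
		{
			Key:      "node-role.kubernetes.io/control-plane",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		},
	}
}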
------------------------------
[sig-node] Variable Expansion
  should succeed in writing subpaths in container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:57:51.765: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Jan 22 23:58:06.156: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4649 PodName:var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 22 23:58:06.156: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 22 23:58:06.157: INFO: ExecWithOptions: Clientset creation
Jan 22 23:58:06.157: INFO: ExecWithOptions: execute(POST https://capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-4649/pods/var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true)
STEP: test for file in mounted path
Jan 22 23:58:06.498: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4649 PodName:var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 22 23:58:06.498: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 22 23:58:06.499: INFO: ExecWithOptions: Clientset creation
Jan 22 23:58:06.500: INFO: ExecWithOptions: execute(POST https://capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/var-expansion-4649/pods/var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true)
STEP: updating the annotation value
Jan 22 23:58:07.370: INFO: Successfully updated pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Jan 22 23:58:07.403: INFO: Deleting pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" in namespace "var-expansion-4649"
Jan 22 23:58:07.443: INFO: Wait up to 5m0s for pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 22 23:58:13.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4649" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":61,"completed":22,"skipped":2405,"failed":0}
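The subpath spec above writes /volume_mount/mypath/foo/test.log and then finds the same file at /subpath_mount/test.log. Below is a minimal sketch of that mechanism, assuming an emptyDir volume mounted twice, once whole and once through subPathExpr; the env var, image, and volume names are invented for illustration and are not taken from the test's own manifest (only the field names are the real core/v1 API).

// Hypothetical container spec: the second mount exposes only the
// sub-directory named by expanding $(MY_SUB_PATH) at mount time.
package example

import corev1 "k8s.io/api/core/v1"

func subPathExprContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
		Command: []string{"sh", "-c", "sleep 3600"},
		Env: []corev1.EnvVar{
			{Name: "MY_SUB_PATH", Value: "mypath/foo"}, // assumed variable
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "workdir", MountPath: "/volume_mount"},
			{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(MY_SUB_PATH)"},
		},
	}
}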
------------------------------
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  kubelet GMSA support
    when creating a pod with correct GMSA credential specs
      passes the credential specs down to the Pod's containers
      test/e2e/windows/gmsa_kubelet.go:45
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:58:13.600: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-kubelet-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] passes the credential specs down to the Pod's containers
  test/e2e/windows/gmsa_kubelet.go:45
STEP: creating a pod with correct GMSA specs
Jan 22 23:58:13.904: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 22 23:58:15.940: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 22 23:58:17.938: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 22 23:58:19.939: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 22 23:58:21.939: INFO: The status of Pod with-correct-gmsa-specs is Running (Ready = true)
STEP: checking the domain reported by nltest in the containers
Jan 22 23:58:21.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-3956 exec --namespace=gmsa-kubelet-test-windows-3956 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Jan 22 23:58:22.850: INFO: stderr: ""
Jan 22 23:58:22.850: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Jan 22 23:58:22.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-3956 exec --namespace=gmsa-kubelet-test-windows-3956 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Jan 22 23:58:23.410: INFO: stderr: ""
Jan 22 23:58:23.410: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/framework.go:188
Jan 22 23:58:23.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-kubelet-test-windows-3956" for this suite.
•{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","total":61,"completed":23,"skipped":2632,"failed":0}
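The GMSA check above shells into each container with kubectl exec and reads the domain from nltest /PARENTDOMAIN, and the earlier ExecWithOptions entries show the same exec subresource being driven through the API server. A hedged client-go sketch of that call follows; the function and parameter names are assumptions for illustration, not framework code.

// Rough equivalent of the framework's ExecWithOptions for a single command.
package example

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execNltest runs "nltest /PARENTDOMAIN" in one container of a pod via the
// exec subresource and returns its stdout.
func execNltest(cfg *rest.Config, cs kubernetes.Interface, ns, pod, container string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"nltest", "/PARENTDOMAIN"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), err
}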
------------------------------
[sig-apps] Daemon set [Serial]
  should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:58:23.483: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 22 23:58:23.929: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:58:23.962: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 22 23:58:23.962: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
[... the same messages repeated roughly once per second while the daemon pods started ...]
Jan 22 23:58:28.032: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:58:28.032: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
Jan 22 23:58:29.031: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 22 23:58:29.031: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: listing all DeamonSets
STEP: DeleteCollection of the DaemonSets
STEP: Verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
Jan 22 23:58:29.236: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"13635"},"items":null}
Jan 22 23:58:29.271: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13635"},"items":[{"metadata":{"name":"daemon-set-csl9w","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"5f65b1be-96d0-4f14-a052-c221bc1a42f0","resourceVersion":"13635","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"f808545888da8fa34ef50c7342af3a9a31bce07aa7f2b4d958a2faca6b326473","cni.projectcalico.org/podIP":"192.168.14.42/32","cni.projectcalico.org/podIPs":"192.168.14.42/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bp6l8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bp6l8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imageP
ullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-2xrmj","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-2xrmj"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.42","podIPs":[{"ip":"192.168.14.42"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://e2ca4f87e85b1f8de4ebabe8b53846b496bab83db74eee1e9b34bdb3d9ca60d4","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lpchl","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"0c8eff20-b12b-47e5-8a27-adc41c9c9751","resourceVersion":"13634","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"d349259a8543860d0148506b5972fe6c52c93c43645616144e5c179f45d6e5c4","cni.projectcalico.org/podIP":"192.168.198.34/32","cni.projectcalico.org/podIPs":"192.168.198.34/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:contain
erPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.198.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-xm2h5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-xm2h5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-96jhk","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-96jhk"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"}],"hostIP":"10.1.0.4","p
odIP":"192.168.198.34","podIPs":[{"ip":"192.168.198.34"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://11f00f8d177ab9ad982484e50b8cd6d456d7f35aeddbb98006233e4be238a22b","started":true}],"qosClass":"BestEffort"}}]}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:58:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9412" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":61,"completed":24,"skipped":2714,"failed":0}
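The spec above lists the DaemonSets in the test namespace, removes them with a single DeleteCollection call, and then confirms via a fresh list that nothing is left (the "items":null dumps). A small client-go sketch of that pattern is below; the label selector is an assumption for illustration.

// Delete every DaemonSet matching a selector, then verify none remain.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDaemonSetCollection(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	if err := cs.AppsV1().DaemonSets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector}); err != nil {
		return err
	}
	remaining, err := cs.AppsV1().DaemonSets(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	if len(remaining.Items) != 0 {
		return fmt.Errorf("expected no DaemonSets, found %d", len(remaining.Items))
	}
	return nil
}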
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
    Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:58:29.448: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-2614
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-2614
I0122 23:58:29.766016 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-2614, replica count: 1
STEP: Running controller
I0122 23:58:39.819004 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-2614
I0122 23:58:39.909937 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-2614, replica count: 1
I0122 23:58:49.963629 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 22 23:58:54.966: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1
Jan 22 23:58:55.000: INFO: RC rs: consume 125 millicores in total
Jan 22 23:58:55.000: INFO: RC rs: sending request to consume 0 millicores
Jan 22 23:58:55.000: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2614/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 }
Jan 22 23:58:55.037: INFO: RC rs: setting consumption to 125 millicores in total
Jan 22 23:58:55.037: INFO: RC rs: consume 0 MB in total
Jan 22 23:58:55.037: INFO: RC rs: setting consumption to 0 MB in total
Jan 22 23:58:55.037: INFO: RC rs: sending request to consume 0 MB
Jan 22 23:58:55.037: INFO: RC rs: consume custom metric 0 in total
Jan 22 23:58:55.037: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2614/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Jan 22 23:58:55.037: INFO: RC rs: setting bump of metric QPS to 0 in total
Jan 22 23:58:55.037: INFO: RC rs: sending request to consume 0 of custom metric QPS
Jan 22 23:58:55.037: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2614/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Jan 22 23:58:55.106: INFO: waiting for 3 replicas (current: 1)
Jan 22 23:59:15.141: INFO: waiting for 3 replicas (current: 1)
[... the consumer re-sent the ConsumeCPU, ConsumeMem and BumpMetric requests to the same URLs every ~30s for the rest of the test, with the CPU target raised from 125 to 500 millicores after the first scale-up ...]
Jan 22 23:59:35.143: INFO: waiting for 3 replicas (current: 1)
Jan 22 23:59:55.142: INFO: waiting for 3 replicas (current: 3)
Jan 22 23:59:55.142: INFO: RC rs: consume 500 millicores in total
Jan 22 23:59:55.143: INFO: RC rs: setting consumption to 500 millicores in total
Jan 22 23:59:55.177: INFO: waiting for 5 replicas (current: 3)
Jan 23 00:00:15.213: INFO: waiting for 5 replicas (current: 3)
Jan 23 00:00:35.211: INFO: waiting for 5 replicas (current: 3)
Jan 23 00:00:55.213: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC rs
Jan 23 00:00:55.250: INFO: RC rs: stopping metric consumer
Jan 23 00:00:55.250: INFO: RC rs: stopping mem consumer
Jan 23 00:00:55.261: INFO: RC rs: stopping CPU consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-2614, will wait for the garbage collector to delete the pods
Jan 23 00:01:05.392: INFO: Deleting ReplicaSet.apps rs took: 45.836173ms
Jan 23 00:01:05.492: INFO: Terminating ReplicaSet.apps rs pods took: 100.708915ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-2614, will wait for the garbage collector to delete the pods
Jan 23 00:01:08.489: INFO: Deleting ReplicationController rs-ctrl took: 37.071202ms
Jan 23 00:01:08.590: INFO: Terminating ReplicationController rs-ctrl pods took: 100.548617ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:188
Jan 23 00:01:10.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-2614" for this suite.
• [SLOW TEST:160.969 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
    Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","total":61,"completed":25,"skipped":2871,"failed":0}
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2614/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=500&requestSizeMillicores=100 } Jan 23 00:00:55.213: INFO: waiting for 5 replicas (current: 5) �[1mSTEP�[0m: Removing consuming RC rs Jan 23 00:00:55.250: INFO: RC rs: stopping metric consumer Jan 23 00:00:55.250: INFO: RC rs: stopping mem consumer Jan 23 00:00:55.261: INFO: RC rs: stopping CPU consumer �[1mSTEP�[0m: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-2614, will wait for the garbage collector to delete the pods Jan 23 00:01:05.392: INFO: Deleting ReplicaSet.apps rs took: 45.836173ms Jan 23 00:01:05.492: INFO: Terminating ReplicaSet.apps rs pods took: 100.708915ms �[1mSTEP�[0m: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-2614, will wait for the garbage collector to delete the pods Jan 23 00:01:08.489: INFO: Deleting ReplicationController rs-ctrl took: 37.071202ms Jan 23 00:01:08.590: INFO: Terminating ReplicationController rs-ctrl pods took: 100.548617ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 00:01:10.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-2614" for this suite. �[32m• [SLOW TEST:160.969 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[90mtest/e2e/autoscaling/framework.go:23�[0m [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:96�[0m Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container �[90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:98�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","total":61,"completed":25,"skipped":2871,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Namespaces [Serial]�[0m �[1mshould patch a Namespace [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:01:10.418: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename namespaces �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned 
in namespace
[It] should patch a Namespace [Conformance] test/e2e/framework/framework.go:652
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188
Jan 23 00:01:10.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6010" for this suite.
STEP: Destroying namespace "nspatchtest-2837c980-446e-4fce-9b28-09f45d9af33c-8325" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":61,"completed":26,"skipped":2943,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates resource limits of pods that are allowed to run [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 00:01:10.932: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92
Jan 23 00:01:11.169: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 00:01:11.238: INFO: Waiting for terminating namespaces to be deleted...
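------------------------------
The "should patch a Namespace" spec that passed just above reduces to three API calls: create a namespace, patch a label onto it, and read it back to confirm the label is present. Below is a minimal client-go sketch of that flow, not the suite's actual implementation; the label key/value and the choice of a strategic-merge patch are illustrative assumptions, and only the /tmp/kubeconfig path is taken from the log.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from the same kubeconfig the framework logs above.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    // creating a Namespace
    ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "nspatchtest-"},
    }, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }

    // patching the Namespace with a label (illustrative key/value)
    patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
    if _, err := cs.CoreV1().Namespaces().Patch(ctx, ns.Name,
        types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // get the Namespace and ensure it has the label
    got, err := cs.CoreV1().Namespaces().Get(ctx, ns.Name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("testLabel =", got.Labels["testLabel"])
}
------------------------------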
Jan 23 00:01:11.271: INFO: Logging pods the apiserver thinks is on node capz-conf-2xrmj before test Jan 23 00:01:11.306: INFO: calico-node-windows-v55p5 from calico-system started at 2023-01-22 23:20:32 +0000 UTC (2 container statuses recorded) Jan 23 00:01:11.306: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 00:01:11.306: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 00:01:11.306: INFO: containerd-logger-h7zw5 from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.306: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 00:01:11.306: INFO: csi-azuredisk-node-win-b7hkf from kube-system started at 2023-01-22 23:21:02 +0000 UTC (3 container statuses recorded) Jan 23 00:01:11.306: INFO: Container azuredisk ready: true, restart count 0 Jan 23 00:01:11.306: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 00:01:11.306: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 00:01:11.306: INFO: csi-proxy-x5wwz from kube-system started at 2023-01-22 23:21:02 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.306: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 00:01:11.306: INFO: kube-proxy-windows-bms6h from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.306: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 00:01:11.306: INFO: Logging pods the apiserver thinks is on node capz-conf-96jhk before test Jan 23 00:01:11.343: INFO: calico-node-windows-b54b2 from calico-system started at 2023-01-22 23:19:35 +0000 UTC (2 container statuses recorded) Jan 23 00:01:11.343: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 00:01:11.343: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 00:01:11.343: INFO: containerd-logger-k8bhm from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.344: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 00:01:11.344: INFO: csi-azuredisk-node-win-xs4sv from kube-system started at 2023-01-22 23:20:05 +0000 UTC (3 container statuses recorded) Jan 23 00:01:11.344: INFO: Container azuredisk ready: true, restart count 0 Jan 23 00:01:11.344: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 00:01:11.344: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 00:01:11.344: INFO: csi-proxy-vmncx from kube-system started at 2023-01-22 23:20:05 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.344: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 00:01:11.344: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 00:01:11.344: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: verifying the node has the label node capz-conf-2xrmj �[1mSTEP�[0m: verifying the node has the label node capz-conf-96jhk Jan 23 00:01:11.569: INFO: Pod calico-node-windows-b54b2 requesting resource cpu=0m on Node capz-conf-96jhk Jan 23 00:01:11.569: INFO: Pod calico-node-windows-v55p5 requesting resource cpu=0m on Node capz-conf-2xrmj Jan 23 00:01:11.569: INFO: Pod containerd-logger-h7zw5 requesting resource cpu=0m on Node capz-conf-2xrmj Jan 23 00:01:11.569: INFO: Pod containerd-logger-k8bhm requesting 
resource cpu=0m on Node capz-conf-96jhk Jan 23 00:01:11.569: INFO: Pod csi-azuredisk-node-win-b7hkf requesting resource cpu=0m on Node capz-conf-2xrmj Jan 23 00:01:11.569: INFO: Pod csi-azuredisk-node-win-xs4sv requesting resource cpu=0m on Node capz-conf-96jhk Jan 23 00:01:11.569: INFO: Pod csi-proxy-vmncx requesting resource cpu=0m on Node capz-conf-96jhk Jan 23 00:01:11.569: INFO: Pod csi-proxy-x5wwz requesting resource cpu=0m on Node capz-conf-2xrmj Jan 23 00:01:11.569: INFO: Pod kube-proxy-windows-bms6h requesting resource cpu=0m on Node capz-conf-2xrmj Jan 23 00:01:11.569: INFO: Pod kube-proxy-windows-mrr95 requesting resource cpu=0m on Node capz-conf-96jhk �[1mSTEP�[0m: Starting Pods to consume most of the cluster CPU. Jan 23 00:01:11.569: INFO: Creating a pod which consumes cpu=2800m on Node capz-conf-96jhk Jan 23 00:01:11.611: INFO: Creating a pod which consumes cpu=2800m on Node capz-conf-2xrmj �[1mSTEP�[0m: Creating another pod that requires unavailable amount of CPU. �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d.173cc718fc3a504a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1217/filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d to capz-conf-96jhk] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d.173cc7198464d3a0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.7" already present on machine] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d.173cc71989f9e378], Reason = [Created], Message = [Created container filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d.173cc719ce2f044c], Reason = [Started], Message = [Started container filler-pod-049037b6-5b89-40e5-ae4f-ae34ab8a3c0d] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35.173cc718fe8eb098], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1217/filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35 to capz-conf-2xrmj] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35.173cc71991769678], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.7" already present on machine] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35.173cc71997ff66b4], Reason = [Created], Message = [Created container filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35] �[1mSTEP�[0m: Considering event: Type = [Normal], Name = [filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35.173cc719e56ae9a0], Reason = [Started], Message = [Started container filler-pod-c8861e0f-5f49-4508-ad7f-c95ce1f5dd35] �[1mSTEP�[0m: Considering event: Type = [Warning], Name = [additional-pod.173cc71a700f4024], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.] 
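------------------------------
The FailedScheduling event above is the expected outcome of this spec: it first adds up the CPU requests already counted against each Windows node (all 0m in the listing above), then creates one 2800m "filler" pod per node to absorb most of the allocatable CPU, and finally submits one more pod whose request no longer fits anywhere. A rough sketch of that arithmetic follows; the 4000m allocatable figure and the 1500m request of the extra pod are assumptions for illustration, since neither value appears in this excerpt.

package main

import "fmt"

func main() {
    // Hypothetical allocatable CPU per Windows node, in millicores; the real
    // figure comes from the node's status and is not shown in this log.
    const allocatableMilli int64 = 4000

    // CPU requests the scheduler already counts on the node. In the run above
    // every pod on capz-conf-2xrmj / capz-conf-96jhk requests 0m.
    var existingRequests int64 = 0

    // The filler pod created by the spec ("Creating a pod which consumes
    // cpu=2800m ...") takes up most of what is free.
    const fillerRequest int64 = 2800

    free := allocatableMilli - existingRequests - fillerRequest

    // Any additional pod asking for more than `free` millicores is rejected
    // with the "Insufficient cpu" FailedScheduling event seen above.
    const additionalRequest int64 = 1500 // illustrative
    fmt.Printf("free=%dm, additional pod wants %dm, schedulable=%v\n",
        free, additionalRequest, additionalRequest <= free)
}

With these assumed numbers the extra pod needs 1500m while only 1200m is left on each worker, so both workers report Insufficient cpu and the control-plane node is excluded by its taint, which matches the "0/3 nodes are available" event text.
------------------------------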
�[1mSTEP�[0m: removing the label node off the node capz-conf-2xrmj �[1mSTEP�[0m: verifying the node doesn't have the label node �[1mSTEP�[0m: removing the label node off the node capz-conf-96jhk �[1mSTEP�[0m: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188 Jan 23 00:01:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sched-pred-1217" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 �[32m•�[0m{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":61,"completed":27,"skipped":3098,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] CronJob�[0m �[1mshould not schedule jobs when suspended [Slow] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:01:19.178: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename cronjob �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not schedule jobs when suspended [Slow] [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating a suspended cronjob �[1mSTEP�[0m: Ensuring no jobs are scheduled �[1mSTEP�[0m: Ensuring no job exists by listing jobs explicitly �[1mSTEP�[0m: Removing cronjob [AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:188 Jan 23 00:06:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "cronjob-6724" for this suite. 
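------------------------------
The CronJob spec that just finished hinges on a single API field: while .spec.suspend is true, the controller keeps track of the schedule but never creates Jobs, which is why the spec can assert that the Job list in cronjob-6724 stays empty for the whole five-minute window. A minimal sketch of such an object built with the Go API types; the name, schedule, image and command are placeholders, not the values used by the conformance suite.

package main

import (
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    suspend := true
    cj := &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{Name: "suspended-example"},
        Spec: batchv1.CronJobSpec{
            Schedule: "*/1 * * * *",
            Suspend:  &suspend, // no Jobs are created while this is true
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyOnFailure,
                            Containers: []corev1.Container{{
                                Name:    "noop",
                                Image:   "busybox", // placeholder image
                                Command: []string{"true"},
                            }},
                        },
                    },
                },
            },
        },
    }
    fmt.Printf("cronjob %s: suspend=%v schedule=%q\n",
        cj.Name, *cj.Spec.Suspend, cj.Spec.Schedule)
}

Setting Suspend back to false (or patching it) lets the controller resume creating Jobs at the next scheduled time.
------------------------------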
• [SLOW TEST:300.479 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should not schedule jobs when suspended [Slow] [Conformance]
test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":61,"completed":28,"skipped":3127,"failed":0}
------------------------------
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 00:06:19.669: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-8983
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/framework/framework.go:652
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8983
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8983
Jan 23 00:06:20.044: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 00:06:30.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 23 00:06:30.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 23 00:06:30.698: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 23 00:06:30.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 23 00:06:30.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 23 00:06:30.732: INFO: Waiting for pod ss-0
to enter Running - Ready=false, currently Running - Ready=true Jan 23 00:06:40.769: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 00:06:40.770: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 00:06:40.907: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999463s Jan 23 00:06:41.942: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.966145055s Jan 23 00:06:42.977: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.931015483s Jan 23 00:06:44.011: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.896403209s Jan 23 00:06:45.046: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.862571054s Jan 23 00:06:46.081: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.827022596s Jan 23 00:06:47.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.792721893s Jan 23 00:06:48.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.74353164s Jan 23 00:06:49.198: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.709187055s Jan 23 00:06:50.233: INFO: Verifying statefulset ss doesn't scale past 1 for another 675.462816ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8983 Jan 23 00:06:51.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 00:06:51.845: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 23 00:06:51.845: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 00:06:51.845: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 00:06:51.879: INFO: Found 1 stateful pods, waiting for 3 Jan 23 00:07:01.914: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 00:07:01.914: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 00:07:01.914: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 23 00:07:11.915: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 00:07:11.915: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 00:07:11.915: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Verifying that stateful set ss was scaled up in order �[1mSTEP�[0m: Scale down will halt with unhealthy stateful pod Jan 23 00:07:11.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 00:07:12.531: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 00:07:12.531: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 00:07:12.531: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 00:07:12.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 00:07:13.142: 
INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 00:07:13.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 00:07:13.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 00:07:13.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 00:07:13.701: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 00:07:13.701: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 00:07:13.701: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 00:07:13.701: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 00:07:13.738: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 23 00:07:23.809: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 00:07:23.809: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 00:07:23.809: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 00:07:23.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999458s Jan 23 00:07:24.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962981037s Jan 23 00:07:25.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.928436561s Jan 23 00:07:27.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.893952863s Jan 23 00:07:28.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.859718973s Jan 23 00:07:29.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.824997759s Jan 23 00:07:30.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.789862723s Jan 23 00:07:31.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.755440192s Jan 23 00:07:32.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.720623893s Jan 23 00:07:33.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 680.796724ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8983 Jan 23 00:07:34.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 00:07:34.823: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 23 00:07:34.823: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 00:07:34.823: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 00:07:34.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 00:07:35.334: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 23 00:07:35.334: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 00:07:35.334: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 00:07:35.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8983 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 00:07:35.893: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 23 00:07:35.893: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 00:07:35.893: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 00:07:35.893: INFO: Scaling statefulset ss to 0 �[1mSTEP�[0m: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 23 00:07:56.031: INFO: Deleting all statefulset in ns statefulset-8983 Jan 23 00:07:56.064: INFO: Scaling statefulset ss to 0 Jan 23 00:07:56.163: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 00:07:56.196: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:188 Jan 23 00:07:56.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-8983" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":61,"completed":29,"skipped":3577,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]�[0m �[90mGMSA support�[0m �[1mcan read and write file to remote SMB folder�[0m �[37mtest/e2e/windows/gmsa_full.go:167�[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:07:56.369: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gmsa-full-test-windows �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] can read and write file to remote SMB folder test/e2e/windows/gmsa_full.go:167 �[1mSTEP�[0m: finding the worker node that fulfills this test's assumptions Jan 23 00:07:56.640: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0 [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/framework.go:188 Jan 23 00:07:56.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gmsa-full-test-windows-6335" for this suite. 
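------------------------------
The GMSA spec above bails out in its very first step because it expects exactly one worker node to carry the agentpool=windowsgmsa label and this cluster has none, so it is reported as skipped rather than failed. Below is a sketch of that guard using a client-go label-selector list; only the kubeconfig path and the label come from the log, the rest is illustrative.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // The spec only runs when exactly one worker node carries this label.
    nodes, err := cs.CoreV1().Nodes().List(context.Background(),
        metav1.ListOptions{LabelSelector: "agentpool=windowsgmsa"})
    if err != nil {
        panic(err)
    }
    if n := len(nodes.Items); n != 1 {
        fmt.Printf("Expected to find exactly one node with the %q label, found %d - skipping\n",
            "agentpool=windowsgmsa", n)
        return
    }
    fmt.Println("GMSA test node:", nodes.Items[0].Name)
}
------------------------------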
�[36m�[1mS [SKIPPING] [0.342 seconds]�[0m [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] �[90mtest/e2e/windows/framework.go:27�[0m GMSA support �[90mtest/e2e/windows/gmsa_full.go:96�[0m �[36m�[1mcan read and write file to remote SMB folder [It]�[0m �[90mtest/e2e/windows/gmsa_full.go:167�[0m �[36mExpected to find exactly one node with the "agentpool=windowsgmsa" label, found 0�[0m test/e2e/windows/gmsa_full.go:173 �[90m------------------------------�[0m �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:07:56.712: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for the rc to be deleted Jan 23 00:08:03.163: INFO: 80 pods remaining Jan 23 00:08:03.163: INFO: 80 pods has nil DeletionTimestamp Jan 23 00:08:03.163: INFO: Jan 23 00:08:04.162: INFO: 68 pods remaining Jan 23 00:08:04.163: INFO: 68 pods has nil DeletionTimestamp Jan 23 00:08:04.163: INFO: Jan 23 00:08:05.160: INFO: 60 pods remaining Jan 23 00:08:05.160: INFO: 60 pods has nil DeletionTimestamp Jan 23 00:08:05.160: INFO: Jan 23 00:08:06.156: INFO: 40 pods remaining Jan 23 00:08:06.156: INFO: 40 pods has nil DeletionTimestamp Jan 23 00:08:06.156: INFO: Jan 23 00:08:07.157: INFO: 28 pods remaining Jan 23 00:08:07.157: INFO: 28 pods has nil DeletionTimestamp Jan 23 00:08:07.157: INFO: Jan 23 00:08:08.155: INFO: 20 pods remaining Jan 23 00:08:08.156: INFO: 20 pods has nil DeletionTimestamp Jan 23 00:08:08.156: INFO: �[1mSTEP�[0m: Gathering metrics Jan 23 00:08:09.297: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true) Jan 23 00:08:09.642: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For 
function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 23 00:08:09.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-9386" for this suite. �[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":61,"completed":30,"skipped":3666,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Pods�[0m �[1mshould cap back-off at MaxContainerBackOff [Slow][NodeConformance]�[0m �[37mtest/e2e/common/node/pods.go:723�[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:08:09.713: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:723 Jan 23 00:08:10.018: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:12.053: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:14.052: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:16.052: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:18.052: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:20.052: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 23 00:08:22.052: INFO: The status of Pod back-off-cap is Running (Ready = true) �[1mSTEP�[0m: getting restart delay when capped Jan 23 00:19:46.172: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-23 00:14:29 +0000 UTC restartedAt=2023-01-23 00:19:45 +0000 UTC (5m16s) Jan 23 00:24:59.155: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-23 00:19:50 +0000 UTC restartedAt=2023-01-23 00:24:58 +0000 UTC (5m8s) Jan 23 00:30:11.136: INFO: getRestartDelay: restartCount = 9, finishedAt=2023-01-23 00:25:03 +0000 UTC restartedAt=2023-01-23 00:30:10 +0000 UTC (5m7s) �[1mSTEP�[0m: getting restart delay after a capped delay Jan 23 00:35:21.032: INFO: getRestartDelay: restartCount = 10, finishedAt=2023-01-23 00:30:15 +0000 UTC restartedAt=2023-01-23 00:35:19 +0000 UTC (5m4s) [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jan 23 00:35:21.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-4730" for this suite. 
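------------------------------
The restart delays measured above (5m16s, 5m8s, 5m7s, 5m4s) are the point of this spec: the kubelet roughly doubles a crashing container's restart back-off after each failure and stops growing it at a cap (MaxContainerBackOff, 5 minutes by default), so after enough crashes every delay hovers just above 300s, the extra seconds being pod re-sync latency. The toy loop below reproduces the doubling-with-cap; the 10s initial delay is the commonly cited kubelet default and is an assumption here, not a value taken from the log.

package main

import (
    "fmt"
    "time"
)

func main() {
    const (
        initialDelay        = 10 * time.Second  // assumed kubelet starting back-off
        maxContainerBackOff = 300 * time.Second // default cap, i.e. 5 minutes
    )

    delay := initialDelay
    for restart := 1; restart <= 10; restart++ {
        fmt.Printf("restartCount=%d -> wait %s before restarting\n", restart, delay)
        // Double the delay after every crash, but never exceed the cap.
        delay *= 2
        if delay > maxContainerBackOff {
            delay = maxContainerBackOff
        }
    }
}
------------------------------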
�[32m• [SLOW TEST:1631.415 seconds]�[0m [sig-node] Pods �[90mtest/e2e/common/node/framework.go:23�[0m should cap back-off at MaxContainerBackOff [Slow][NodeConformance] �[90mtest/e2e/common/node/pods.go:723�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":61,"completed":31,"skipped":3717,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)�[0m �[90m[Serial] [Slow] ReplicationController�[0m �[1mShould scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability�[0m �[37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:61�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 00:35:21.129: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename horizontal-pod-autoscaling �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:61 �[1mSTEP�[0m: Running consuming RC rc via /v1, Kind=ReplicationController with 1 replicas �[1mSTEP�[0m: creating replication controller rc in namespace horizontal-pod-autoscaling-3993 I0123 00:35:21.446160 14 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-3993, replica count: 1 �[1mSTEP�[0m: Running controller I0123 00:35:31.497950 14 runners.go:193] rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-3993 I0123 00:35:31.580590 14 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-3993, replica count: 1 I0123 00:35:41.631870 14 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 00:35:46.632: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Jan 23 00:35:46.665: INFO: RC rc: consume 250 millicores in total Jan 23 00:35:46.666: INFO: RC rc: sending request to consume 0 millicores Jan 23 00:35:46.666: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 } Jan 23 00:35:46.704: INFO: RC rc: setting consumption to 250 millicores in total Jan 23 00:35:46.704: INFO: RC rc: consume 0 MB in total Jan 23 00:35:46.704: INFO: RC rc: setting consumption to 0 MB in total Jan 23 00:35:46.705: INFO: RC rc: sending request to consume 0 MB Jan 23 00:35:46.705: INFO: ConsumeMem URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:35:46.705: INFO: RC rc: consume custom metric 0 in total Jan 23 00:35:46.705: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:35:46.706: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:35:46.741: INFO: RC rc: setting bump of metric QPS to 0 in total Jan 23 00:35:46.811: INFO: waiting for 3 replicas (current: 1) Jan 23 00:36:06.845: INFO: waiting for 3 replicas (current: 1) Jan 23 00:36:16.704: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:36:16.705: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:36:16.741: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:36:16.741: INFO: RC rc: sending request to consume 0 MB Jan 23 00:36:16.741: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:36:16.741: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:36:26.846: INFO: waiting for 3 replicas (current: 1) Jan 23 00:36:46.845: INFO: waiting for 3 replicas (current: 2) Jan 23 00:36:49.786: INFO: RC rc: sending request to consume 0 MB Jan 23 00:36:49.786: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:36:49.786: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:36:49.786: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:36:49.786: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:36:49.787: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:37:06.845: INFO: waiting for 3 replicas (current: 3) Jan 23 00:37:06.878: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:06.911: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026afccc} Jan 23 00:37:16.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:16.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 
+0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c1a4} Jan 23 00:37:19.826: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:37:19.826: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:37:22.839: INFO: RC rc: sending request to consume 0 MB Jan 23 00:37:22.839: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:37:22.839: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:37:22.839: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:37:26.950: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:26.983: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae6ec} Jan 23 00:37:36.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:36.981: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae8cc} Jan 23 00:37:46.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:46.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c224} Jan 23 00:37:49.864: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:37:49.864: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:37:52.884: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:37:52.884: INFO: RC rc: sending request to consume 0 MB Jan 23 00:37:52.884: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:37:52.884: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:37:56.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:37:56.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c63c} Jan 23 00:38:06.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:38:06.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c89c} Jan 23 00:38:16.946: INFO: expecting there to 
be in [3, 4] replicas (are: 3) Jan 23 00:38:16.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2c49c} Jan 23 00:38:19.900: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:38:19.900: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:38:22.920: INFO: RC rc: sending request to consume 0 MB Jan 23 00:38:22.920: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:38:22.925: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:38:22.925: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:38:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:38:26.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7cafc} Jan 23 00:38:36.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:38:36.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7cf5c} Jan 23 00:38:46.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:38:46.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae3ec} Jan 23 00:38:49.942: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:38:49.942: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:38:52.955: INFO: RC rc: sending request to consume 0 MB Jan 23 00:38:52.955: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:38:52.966: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:38:52.966: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:38:56.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:38:56.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7d144} Jan 23 00:39:06.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:06.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 
UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7d6dc} Jan 23 00:39:16.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:16.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daebcc} Jan 23 00:39:19.981: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:39:19.981: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:39:22.994: INFO: RC rc: sending request to consume 0 MB Jan 23 00:39:22.994: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:39:23.007: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:39:23.007: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:39:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:26.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae08c} Jan 23 00:39:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:36.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daf164} Jan 23 00:39:46.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:46.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae1bc} Jan 23 00:39:50.018: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:39:50.018: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:39:53.031: INFO: RC rc: sending request to consume 0 MB Jan 23 00:39:53.031: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:39:53.049: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:39:53.049: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:39:56.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:39:56.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae2cc} Jan 23 00:40:06.948: INFO: expecting there to be in 
[3, 4] replicas (are: 3) Jan 23 00:40:06.981: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae4cc} Jan 23 00:40:16.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:40:16.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2c98c} Jan 23 00:40:20.055: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:40:20.055: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:40:23.067: INFO: RC rc: sending request to consume 0 MB Jan 23 00:40:23.067: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:40:23.089: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:40:23.089: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:40:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:40:26.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae81c} Jan 23 00:40:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:40:36.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026aef7c} Jan 23 00:40:46.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:40:46.981: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daea04} Jan 23 00:40:50.092: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:40:50.093: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:40:53.102: INFO: RC rc: sending request to consume 0 MB Jan 23 00:40:53.102: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:40:53.130: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:40:53.131: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:40:56.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:40:56.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026af824} Jan 23 00:41:06.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:41:06.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2ce3c} Jan 23 00:41:16.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:41:16.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026afcf4} Jan 23 00:41:20.131: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:41:20.131: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:41:23.138: INFO: RC rc: sending request to consume 0 MB Jan 23 00:41:23.139: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:41:23.172: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:41:23.172: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:41:26.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:41:26.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d24c} Jan 23 00:41:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:41:36.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daf3fc} Jan 23 00:41:46.954: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:41:47.002: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c08c} Jan 23 00:41:50.169: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:41:50.169: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:41:53.174: INFO: RC rc: sending request to consume 0 MB Jan 23 00:41:53.175: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:41:53.212: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:41:53.213: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:41:56.946: INFO: expecting there to be in [3, 
4] replicas (are: 3) Jan 23 00:41:56.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2c95c} Jan 23 00:42:06.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:06.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2cb5c} Jan 23 00:42:16.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:16.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae24c} Jan 23 00:42:20.205: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:42:20.205: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:42:23.212: INFO: RC rc: sending request to consume 0 MB Jan 23 00:42:23.212: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:42:23.255: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:42:23.255: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:42:26.952: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:26.986: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d014} Jan 23 00:42:36.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:36.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d2cc} Jan 23 00:42:46.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:46.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c86c} Jan 23 00:42:50.245: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:42:50.246: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:42:53.252: INFO: RC rc: sending request to consume 0 MB Jan 23 00:42:53.252: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:42:53.298: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:42:53.298: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:42:56.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:42:56.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d39c} Jan 23 00:43:06.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:06.981: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae9b4} Jan 23 00:43:16.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:16.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d70c} Jan 23 00:43:20.285: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:43:20.285: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:43:23.289: INFO: RC rc: sending request to consume 0 MB Jan 23 00:43:23.289: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:43:23.340: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:43:23.340: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:43:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:26.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daef5c} Jan 23 00:43:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:36.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003daf1bc} Jan 23 00:43:46.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:46.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2c08c} Jan 23 00:43:50.321: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:43:50.321: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:43:53.324: INFO: RC rc: sending request to consume 0 MB Jan 23 00:43:53.324: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:43:53.384: INFO: RC rc: sending request to consume 
250 millicores Jan 23 00:43:53.384: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:43:56.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:43:56.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c1ac} Jan 23 00:44:06.948: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:06.981: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae10c} Jan 23 00:44:16.949: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:16.982: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c40c} Jan 23 00:44:20.361: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:44:20.362: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:44:23.361: INFO: RC rc: sending request to consume 0 MB Jan 23 00:44:23.361: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:44:23.426: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:44:23.426: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:44:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:26.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d0f4} Jan 23 00:44:36.947: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:36.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7c99c} Jan 23 00:44:46.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:46.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d37c} Jan 23 00:44:50.401: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:44:50.401: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:44:53.399: INFO: RC rc: sending request to consume 0 MB Jan 23 00:44:53.399: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:44:53.466: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:44:53.466: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:44:56.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:44:56.978: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc001c7cfcc} Jan 23 00:45:06.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:06.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae40c} Jan 23 00:45:16.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:16.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae60c} Jan 23 00:45:20.436: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:45:20.436: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:45:23.440: INFO: RC rc: sending request to consume 0 MB Jan 23 00:45:23.440: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:45:23.507: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:45:23.507: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:45:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:26.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000b68bac} Jan 23 00:45:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:36.977: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000b68da4} Jan 23 00:45:46.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:46.980: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae86c} Jan 23 00:45:50.472: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:45:50.472: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:45:53.477: INFO: RC rc: sending request 
to consume 0 MB Jan 23 00:45:53.477: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:45:53.549: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:45:53.549: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:45:56.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:45:56.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae25c} Jan 23 00:46:06.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:06.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae32c} Jan 23 00:46:16.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:16.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae3fc} Jan 23 00:46:20.508: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:46:20.508: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:46:23.514: INFO: RC rc: sending request to consume 0 MB Jan 23 00:46:23.514: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:46:23.593: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:46:23.593: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:46:26.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:26.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003dae884} Jan 23 00:46:36.945: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:36.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2c9ec} Jan 23 00:46:46.954: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:47.004: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2ca9c} Jan 23 00:46:50.546: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:46:50.546: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:46:53.551: INFO: RC rc: sending request to consume 0 MB Jan 23 00:46:53.552: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:46:53.637: INFO: RC rc: sending request to consume 250 millicores Jan 23 00:46:53.637: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 00:46:56.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:46:56.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d254} Jan 23 00:47:06.946: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:47:06.979: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d45c} Jan 23 00:47:07.013: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:47:07.046: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:36:46 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc000f2d594} Jan 23 00:47:07.046: INFO: Number of replicas was stable over 10m0s Jan 23 00:47:07.046: INFO: RC rc: consume 700 millicores in total Jan 23 00:47:07.046: INFO: RC rc: setting consumption to 700 millicores in total Jan 23 00:47:07.079: INFO: waiting for 5 replicas (current: 3) Jan 23 00:47:20.584: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:47:20.584: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:47:23.590: INFO: RC rc: sending request to consume 0 MB Jan 23 00:47:23.590: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:47:23.677: INFO: RC rc: sending request to consume 700 millicores Jan 23 00:47:23.678: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 23 00:47:27.116: INFO: waiting for 5 replicas (current: 3) Jan 23 00:47:47.115: INFO: waiting for 5 replicas (current: 3) Jan 23 00:47:50.621: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:47:50.622: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:47:53.628: INFO: RC rc: sending request to consume 0 MB Jan 23 00:47:53.628: INFO: ConsumeMem URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:47:53.719: INFO: RC rc: sending request to consume 700 millicores Jan 23 00:47:53.719: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 23 00:48:07.114: INFO: waiting for 5 replicas (current: 5) STEP: Removing consuming RC rc Jan 23 00:48:07.153: INFO: RC rc: stopping metric consumer Jan 23 00:48:07.153: INFO: RC rc: stopping CPU consumer Jan 23 00:48:07.153: INFO: RC rc: stopping mem consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-3993, will wait for the garbage collector to delete the pods Jan 23 00:48:17.275: INFO: Deleting ReplicationController rc took: 37.059361ms Jan 23 00:48:17.376: INFO: Terminating ReplicationController rc pods took: 101.052124ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-3993, will wait for the garbage collector to delete the pods Jan 23 00:48:19.750: INFO: Deleting ReplicationController rc-ctrl took: 38.094223ms Jan 23 00:48:19.851: INFO: Terminating ReplicationController rc-ctrl pods took: 101.11312ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 00:48:22.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-3993" for this suite. • [SLOW TEST:781.075 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:59 Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:61 ------------------------------ {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","total":61,"completed":32,"skipped":3759,"failed":0}
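The repeated "ConsumeCPU URL", "ConsumeMem URL", and "ConsumeCustomMetric URL" entries above are the fields of the request the test's resource consumer builds: an HTTPS call proxied through the API server to the rc-ctrl service, with the desired load passed as query parameters (millicores, megabytes, or metric delta) together with durationSec and a per-request size. The following is a minimal, hypothetical client-go sketch of one such proxy call, not the suite's own helper; the namespace, service name, and parameter values are taken from the log above, and the /tmp/kubeconfig path is an assumption based on the kubeConfig the suite prints.

```go
// Hypothetical sketch: issue one ConsumeCPU request through the API server's
// service proxy, mirroring the URLs logged by the HPA e2e test.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same kubeconfig the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// POST /api/v1/namespaces/horizontal-pod-autoscaling-3993/services/rc-ctrl/proxy/ConsumeCPU
	//      ?durationSec=30&millicores=250&requestSizeMillicores=100
	err = client.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-3993").
		Resource("services").
		Name("rc-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", "250").
		Param("requestSizeMillicores", "100").
		Do(context.Background()).
		Error()
	fmt.Println("ConsumeCPU proxy call error:", err)
}
```

In the passing run above, holding consumption at 250 millicores kept the autoscaler stable at 3 replicas for the full 10-minute stability window, and raising it to 700 millicores scaled the ReplicationController up to 5 replicas before the consumer and its controllers were deleted.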
------------------------------ [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5
pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 23 00:48:22.211: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64 STEP: Running consuming RC rc via /v1, Kind=ReplicationController with 5 replicas STEP: creating replication controller rc in namespace horizontal-pod-autoscaling-7541 I0123 00:48:22.530660 14 runners.go:193] Created replication controller with name: rc, namespace: horizontal-pod-autoscaling-7541, replica count: 5 STEP: Running controller I0123 00:48:32.582707 14 runners.go:193] rc Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-7541 I0123 00:48:32.673851 14 runners.go:193] Created replication controller with name: rc-ctrl, namespace: horizontal-pod-autoscaling-7541, replica count: 1 I0123 00:48:42.726652 14 runners.go:193] rc-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 00:48:47.727: INFO: Waiting for amount of service:rc-ctrl endpoints to be 1 Jan 23 00:48:47.761: INFO: RC rc: consume 325 millicores in total Jan 23 00:48:47.761: INFO: RC rc: sending request to consume 0 millicores Jan 23 00:48:47.761: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 } Jan 23 00:48:47.798: INFO: RC rc: setting consumption to 325 millicores in total Jan 23 00:48:47.798: INFO: RC rc: consume 0 MB in total Jan 23 00:48:47.798: INFO: RC rc: setting consumption to 0 MB in total Jan 23 00:48:47.798: INFO: RC rc: sending request to consume 0 MB Jan 23 00:48:47.798: INFO: RC rc: consume custom metric 0 in total Jan 23 00:48:47.798: INFO: RC rc: setting bump of metric QPS to 0 in total Jan 23 00:48:47.798: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:48:47.798: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:48:47.799: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:48:47.897: INFO: waiting for 3 replicas (current: 5) Jan 23 00:49:07.931: INFO: waiting for 3 replicas (current: 5) Jan 23 00:49:17.798: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:49:17.798: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:49:17.866: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:49:17.866: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:49:17.866: INFO: RC rc: sending request to consume 0 MB Jan 23 00:49:17.867: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:49:27.930: INFO: waiting for 3 replicas (current: 5) Jan 23 00:49:47.860: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:49:47.860: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:49:47.902: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:49:47.902: INFO: RC rc: sending request to consume 0 MB Jan 23 00:49:47.902: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:49:47.903: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:49:47.931: INFO: waiting for 3 replicas (current: 5) Jan 23 00:50:07.932: INFO: waiting for 3 replicas (current: 5) Jan 23 00:50:17.903: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:50:17.903: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:50:17.962: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:50:17.962: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:50:17.962: INFO: RC rc: sending request to consume 0 MB Jan 23 00:50:17.963: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:50:27.934: INFO: waiting for 3 replicas (current: 5) Jan 23 00:50:47.940: INFO: waiting for 3 replicas (current: 5) Jan 23 00:50:47.944: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:50:47.945: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:50:48.001: INFO: RC rc: sending request to consume 0 MB Jan 23 00:50:48.001: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:50:48.001: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:50:48.001: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:51:07.933: INFO: waiting for 3 replicas (current: 5) Jan 23 00:51:17.986: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:51:17.986: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:51:18.038: INFO: RC rc: sending request to consume 0 MB Jan 23 00:51:18.038: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:51:18.038: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:51:18.038: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:51:27.932: INFO: waiting for 3 replicas (current: 5) Jan 23 00:51:47.932: INFO: waiting for 3 replicas (current: 5) Jan 23 00:51:48.029: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:51:48.029: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:51:48.073: INFO: RC rc: sending request to consume 0 MB Jan 23 00:51:48.073: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:51:48.073: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:51:48.073: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:52:07.931: INFO: waiting for 3 replicas (current: 5) Jan 23 00:52:18.073: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:52:18.074: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:52:18.109: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:52:18.109: INFO: RC rc: sending 
request to consume 0 MB Jan 23 00:52:18.109: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:52:18.109: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:52:27.933: INFO: waiting for 3 replicas (current: 5) Jan 23 00:52:47.931: INFO: waiting for 3 replicas (current: 5) Jan 23 00:52:48.115: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:52:48.115: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:52:48.145: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:52:48.145: INFO: RC rc: sending request to consume 0 MB Jan 23 00:52:48.145: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:52:48.145: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:53:07.933: INFO: waiting for 3 replicas (current: 5) Jan 23 00:53:18.157: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:53:18.158: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:53:18.180: INFO: RC rc: sending request to consume 0 MB Jan 23 00:53:18.180: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:53:18.187: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:53:18.187: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:53:27.933: INFO: waiting for 3 replicas (current: 5) Jan 23 00:53:47.932: INFO: waiting for 3 replicas (current: 5) Jan 23 00:53:48.199: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:53:48.199: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:53:48.215: INFO: RC rc: sending request to consume 0 MB Jan 23 00:53:48.215: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:53:48.222: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:53:48.222: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:54:07.932: INFO: waiting for 3 replicas (current: 3) Jan 23 00:54:07.965: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:07.998: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c2e4} Jan 23 00:54:18.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:18.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:5 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e43c} Jan 23 00:54:18.244: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:54:18.245: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:54:18.250: INFO: RC rc: sending request to consume 0 MB Jan 23 00:54:18.250: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:54:18.258: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:54:18.258: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:54:28.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de674} Jan 23 00:54:38.031: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:38.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c49c} Jan 23 00:54:48.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:48.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de97c} Jan 23 00:54:48.289: INFO: RC rc: sending request to consume 0 MB Jan 23 00:54:48.290: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:54:48.289: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:54:48.291: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:54:48.292: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:54:48.292: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:54:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:54:58.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317eb74} Jan 23 00:55:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:08.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317edec} Jan 23 00:55:18.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317eea4} Jan 23 00:55:18.327: INFO: RC rc: sending request to consume 0 MB Jan 23 00:55:18.327: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:55:18.333: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:55:18.333: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:55:18.333: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:55:18.333: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:55:28.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df2fc} Jan 23 00:55:38.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:38.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317f46c} Jan 23 00:55:48.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:48.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de094} Jan 23 00:55:48.363: INFO: RC rc: sending request to consume 0 MB Jan 23 00:55:48.363: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:55:48.367: INFO: RC rc: sending request to consume 
0 of custom metric QPS Jan 23 00:55:48.367: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:55:48.374: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:55:48.374: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:55:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:55:58.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c094} Jan 23 00:56:08.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:08.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c14c} Jan 23 00:56:18.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043dea3c} Jan 23 00:56:18.398: INFO: RC rc: sending request to consume 0 MB Jan 23 00:56:18.398: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:56:18.402: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:56:18.402: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:56:18.416: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:56:18.416: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:56:28.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:28.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043def4c} Jan 23 00:56:38.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:38.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df13c} Jan 23 00:56:48.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:48.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c324} Jan 23 00:56:48.434: INFO: RC rc: sending request to consume 0 MB Jan 23 00:56:48.434: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:56:48.438: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:56:48.438: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:56:48.458: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:56:48.458: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:56:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:56:58.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df31c} Jan 23 00:57:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:08.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df3e4} Jan 23 00:57:18.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317eabc} Jan 23 00:57:18.471: INFO: RC rc: sending request to consume 0 MB Jan 23 00:57:18.471: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:57:18.478: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:57:18.479: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:57:18.500: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:57:18.500: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:57:28.031: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259ca34} Jan 23 00:57:38.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:38.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259cb04} Jan 23 00:57:48.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:48.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de07c} Jan 23 00:57:48.507: INFO: RC rc: sending request 
to consume 0 MB Jan 23 00:57:48.508: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:57:48.514: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:57:48.514: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:57:48.541: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:57:48.542: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:57:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:57:58.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e07c} Jan 23 00:58:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:08.064: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c25c} Jan 23 00:58:18.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:18.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c32c} Jan 23 00:58:18.544: INFO: RC rc: sending request to consume 0 MB Jan 23 00:58:18.544: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:58:18.549: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:58:18.549: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:58:18.582: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:58:18.582: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:58:28.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de8ec} Jan 23 00:58:38.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:38.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259cb4c} Jan 23 00:58:48.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:48.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043deb4c} Jan 23 00:58:48.579: INFO: RC rc: sending request to consume 0 MB Jan 23 00:58:48.579: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:58:48.584: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:58:48.584: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:58:48.627: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:58:48.628: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:58:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:58:58.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e554} Jan 23 00:59:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:59:08.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259cf5c} Jan 23 00:59:18.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:59:18.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043deed4} Jan 23 00:59:18.616: INFO: RC rc: sending request to consume 0 MB Jan 23 00:59:18.616: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:59:18.619: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:59:18.619: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:59:18.669: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:59:18.670: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:59:28.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:59:28.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df404} Jan 23 00:59:38.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:59:38.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043df4bc} Jan 23 00:59:48.033: INFO: expecting there to be in [3, 
4] replicas (are: 3) Jan 23 00:59:48.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae064} Jan 23 00:59:48.651: INFO: RC rc: sending request to consume 0 MB Jan 23 00:59:48.651: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 00:59:48.653: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 00:59:48.653: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 00:59:48.713: INFO: RC rc: sending request to consume 325 millicores Jan 23 00:59:48.713: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 00:59:58.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 00:59:58.070: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c3ec} Jan 23 01:00:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:08.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de07c} Jan 23 01:00:18.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de14c} Jan 23 01:00:18.688: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:00:18.688: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:00:18.689: INFO: RC rc: sending request to consume 0 MB Jan 23 01:00:18.689: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:00:18.755: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:00:18.756: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:00:28.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae774} Jan 23 01:00:38.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:38.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0026ae82c} Jan 23 01:00:48.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:48.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de76c} Jan 23 01:00:48.724: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:00:48.724: INFO: RC rc: sending request to consume 0 MB Jan 23 01:00:48.724: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:00:48.724: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:00:48.797: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:00:48.797: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:00:58.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:00:58.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259ca44} Jan 23 01:01:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:01:08.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e24c} Jan 23 01:01:18.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:01:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e3c4} Jan 23 01:01:18.760: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:01:18.760: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:01:18.760: INFO: RC rc: sending request to consume 0 MB Jan 23 01:01:18.760: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:01:18.838: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:01:18.839: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:01:28.031: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:01:28.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de9e4} Jan 23 01:01:38.033: INFO: expecting there to be in [3, 
4] replicas (are: 3) Jan 23 01:01:38.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259cedc} Jan 23 01:01:48.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:01:48.068: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043dec4c} Jan 23 01:01:48.797: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:01:48.798: INFO: RC rc: sending request to consume 0 MB Jan 23 01:01:48.798: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:01:48.798: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:01:48.885: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:01:48.885: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:01:58.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:01:58.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e144} Jan 23 01:02:08.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:08.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e3bc} Jan 23 01:02:18.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:18.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317e484} Jan 23 01:02:18.833: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:02:18.833: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:02:18.833: INFO: RC rc: sending request to consume 0 MB Jan 23 01:02:18.833: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:02:18.929: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:02:18.929: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:02:28.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC 
CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c3cc} Jan 23 01:02:38.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:38.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de6ec} Jan 23 01:02:48.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:48.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317ebdc} Jan 23 01:02:48.867: INFO: RC rc: sending request to consume 0 MB Jan 23 01:02:48.868: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:02:48.868: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:02:48.868: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:02:48.971: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:02:48.971: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:02:58.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:02:58.069: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043dec74} Jan 23 01:03:08.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:03:08.064: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c6cc} Jan 23 01:03:18.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:03:18.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c8bc} Jan 23 01:03:18.904: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:03:18.904: INFO: RC rc: sending request to consume 0 MB Jan 23 01:03:18.904: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:03:18.904: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:03:19.017: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:03:19.017: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:03:28.032: INFO: expecting there to be in [3, 
4] replicas (are: 3) Jan 23 01:03:28.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317f1b4} Jan 23 01:03:38.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:03:38.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00317f39c} Jan 23 01:03:48.032: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:03:48.065: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043deffc} Jan 23 01:03:48.940: INFO: RC rc: sending request to consume 0 MB Jan 23 01:03:48.940: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:03:48.940: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:03:48.940: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:03:49.059: INFO: RC rc: sending request to consume 325 millicores Jan 23 01:03:49.059: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:03:58.034: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:03:58.067: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc0043de30c} Jan 23 01:04:08.033: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:04:08.066: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c41c} Jan 23 01:04:08.099: INFO: expecting there to be in [3, 4] replicas (are: 3) Jan 23 01:04:08.132: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2023-01-23 00:54:03 +0000 UTC CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00259c67c} Jan 23 01:04:08.133: INFO: Number of replicas was stable over 10m0s Jan 23 01:04:08.133: INFO: RC rc: consume 10 millicores in total Jan 23 01:04:08.133: INFO: RC rc: setting consumption to 10 millicores in total Jan 23 01:04:08.166: INFO: waiting for 1 replicas (current: 3) Jan 23 01:04:18.976: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:04:18.976: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:04:18.976: INFO: RC rc: sending request to consume 0 MB Jan 23 01:04:18.976: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false 
durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:04:19.109: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:04:19.109: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:04:28.199: INFO: waiting for 1 replicas (current: 3) Jan 23 01:04:48.200: INFO: waiting for 1 replicas (current: 3) Jan 23 01:04:49.013: INFO: RC rc: sending request to consume 0 MB Jan 23 01:04:49.013: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:04:49.013: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:04:49.013: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:04:49.148: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:04:49.148: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:05:08.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:05:19.049: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:05:19.049: INFO: RC rc: sending request to consume 0 MB Jan 23 01:05:19.049: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:05:19.049: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:05:19.191: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:05:19.191: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:05:28.202: INFO: waiting for 1 replicas (current: 3) Jan 23 01:05:48.199: INFO: waiting for 1 replicas (current: 3) Jan 23 01:05:49.085: INFO: RC rc: sending request to consume 0 MB Jan 23 01:05:49.085: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:05:49.085: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:05:49.085: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:05:49.233: INFO: RC rc: sending request to consume 10 millicores 
Jan 23 01:05:49.233: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:06:08.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:06:19.124: INFO: RC rc: sending request to consume 0 MB Jan 23 01:06:19.124: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:06:19.124: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:06:19.125: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:06:19.273: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:06:19.273: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:06:28.202: INFO: waiting for 1 replicas (current: 3) Jan 23 01:06:48.202: INFO: waiting for 1 replicas (current: 3) Jan 23 01:06:49.160: INFO: RC rc: sending request to consume 0 MB Jan 23 01:06:49.160: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:06:49.161: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:06:49.161: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:06:49.314: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:06:49.315: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:07:08.199: INFO: waiting for 1 replicas (current: 3) Jan 23 01:07:19.200: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:07:19.201: INFO: RC rc: sending request to consume 0 MB Jan 23 01:07:19.201: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:07:19.200: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:07:19.358: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:07:19.358: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:07:28.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:07:48.199: INFO: waiting for 1 replicas (current: 3) Jan 23 01:07:49.236: INFO: RC rc: sending request to consume 0 MB Jan 23 01:07:49.237: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:07:49.237: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:07:49.237: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:07:49.398: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:07:49.398: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:08:08.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:08:19.276: INFO: RC rc: sending request to consume 0 MB Jan 23 01:08:19.276: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:08:19.276: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:08:19.277: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:08:19.440: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:08:19.440: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:08:28.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:08:48.200: INFO: waiting for 1 replicas (current: 3) Jan 23 01:08:49.312: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:08:49.312: INFO: RC rc: sending request to consume 0 MB Jan 23 01:08:49.312: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:08:49.312: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:08:49.481: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:08:49.481: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:09:08.201: INFO: waiting for 1 replicas (current: 3) Jan 23 01:09:19.349: INFO: RC rc: sending request to consume 0 MB Jan 23 01:09:19.349: INFO: RC rc: sending request to consume 0 of custom metric QPS Jan 23 01:09:19.349: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:09:19.349: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:09:19.521: INFO: RC rc: sending request to consume 10 millicores Jan 23 01:09:19.521: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:09:28.202: INFO: waiting for 1 replicas (current: 2) Jan 23 01:09:48.202: INFO: waiting for 1 replicas (current: 1) STEP: Removing consuming RC rc Jan 23 01:09:48.238: INFO: RC rc: stopping metric consumer Jan 23 01:09:48.239: INFO: RC rc: stopping CPU consumer Jan 23 01:09:48.239: INFO: RC rc: stopping mem consumer STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-7541, will wait for the garbage collector to delete the pods Jan 23 01:09:58.362: INFO: Deleting ReplicationController rc took: 37.103397ms Jan 23 01:09:58.463: INFO: Terminating ReplicationController rc pods took: 100.401073ms STEP: deleting ReplicationController rc-ctrl in namespace horizontal-pod-autoscaling-7541, will wait for the garbage collector to delete the pods Jan 23 01:10:00.158: INFO: Deleting ReplicationController rc-ctrl took: 36.918318ms Jan 23 01:10:00.259: INFO: Terminating ReplicationController rc-ctrl pods took: 101.006217ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 01:10:02.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "horizontal-pod-autoscaling-7541" for this suite.
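Note on the ConsumeCPU / ConsumeMem / BumpMetric entries above: the curly-brace values appear to be Go URL structs for the requests the resource consumer sends to the rc-ctrl service through the API server's service proxy, and the query parameters (durationSec, millicores, megabytes, delta, requestSize*) are the load knobs that drive the HPA. The following is a minimal, illustrative Go sketch that rebuilds one of the logged URLs with net/url; it is not the e2e framework's own helper, and an authenticated client would still be needed to actually send the request.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Rebuild the ConsumeCPU request URL as it appears in the log:
	// scheme, API server host, service-proxy path, and the load parameters.
	u := url.URL{
		Scheme:   "https",
		Host:     "capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443",
		Path:     "/api/v1/namespaces/horizontal-pod-autoscaling-7541/services/rc-ctrl/proxy/ConsumeCPU",
		RawQuery: "durationSec=30&millicores=10&requestSizeMillicores=100",
	}
	// Prints the full proxy URL; requesting it (with cluster credentials) asks
	// the consumer pods to burn 10 millicores for 30 seconds, which is the low
	// load that lets the HPA scale the ReplicationController back down to 1.
	fmt.Println(u.String())
}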
• [SLOW TEST:1299.912 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:59 Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:64 ------------------------------ {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","total":61,"completed":33,"skipped":4094,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 23 01:10:02.123: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 STEP: Creating service test in namespace statefulset-8264 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/framework/framework.go:652 STEP: Creating stateful set ss in namespace statefulset-8264 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8264 Jan 23 01:10:02.476: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 23 01:10:12.513: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 23 01:10:12.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 01:10:13.129: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 01:10:13.129: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 01:10:13.129: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 01:10:13.164: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 23 01:10:23.200: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 01:10:23.200: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 01:10:23.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999962s Jan 23 01:10:24.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.929916572s Jan 23 01:10:25.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.891624947s Jan 23 01:10:26.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.851277291s Jan 23 01:10:27.557: INFO:
Verifying statefulset ss doesn't scale past 3 for another 5.811917443s Jan 23 01:10:28.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.772692883s Jan 23 01:10:29.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.73349086s Jan 23 01:10:30.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.694609421s Jan 23 01:10:31.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.654410578s Jan 23 01:10:32.753: INFO: Verifying statefulset ss doesn't scale past 3 for another 615.334851ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8264 Jan 23 01:10:33.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 01:10:34.340: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 23 01:10:34.340: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 01:10:34.340: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 01:10:34.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 01:10:34.901: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 23 01:10:34.901: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 01:10:34.901: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 01:10:34.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 01:10:35.405: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 23 01:10:35.405: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 01:10:35.405: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 01:10:35.443: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 01:10:35.444: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 01:10:35.444: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 23 01:10:35.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 01:10:36.050: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 01:10:36.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 01:10:36.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 01:10:36.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig
--namespace=statefulset-8264 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 01:10:36.542: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 01:10:36.543: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 01:10:36.543: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 01:10:36.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8264 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 01:10:37.064: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 23 01:10:37.064: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 01:10:37.064: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 01:10:37.064: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 01:10:37.097: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 23 01:10:47.168: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 01:10:47.168: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 01:10:47.168: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 01:10:47.275: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:47.275: INFO: ss-0 capz-conf-2xrmj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC }] Jan 23 01:10:47.275: INFO: ss-1 capz-conf-96jhk Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:47.275: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:47.275: INFO: Jan 23 01:10:47.275: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 01:10:48.315: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:48.315: INFO: ss-0 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC }] Jan 23 01:10:48.315: INFO: ss-1 capz-conf-96jhk Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:48.315: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:48.315: INFO: Jan 23 01:10:48.315: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 01:10:49.354: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:49.354: INFO: ss-0 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC }] Jan 23 01:10:49.354: INFO: ss-1 capz-conf-96jhk Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:49.354: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:49.354: INFO: Jan 23 01:10:49.354: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 01:10:50.393: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:50.393: INFO: ss-0 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2023-01-23 01:10:02 +0000 UTC }] Jan 23 01:10:50.393: INFO: ss-1 capz-conf-96jhk Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:50.393: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:50.393: INFO: Jan 23 01:10:50.393: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 01:10:51.433: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:51.433: INFO: ss-0 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:02 +0000 UTC }] Jan 23 01:10:51.433: INFO: ss-1 capz-conf-96jhk Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:51.433: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:51.433: INFO: Jan 23 01:10:51.433: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 01:10:52.470: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:10:52.470: INFO: ss-2 capz-conf-2xrmj Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:10:23 +0000 UTC }] Jan 23 01:10:52.470: INFO: Jan 23 01:10:52.470: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 23 01:10:53.504: INFO: Verifying statefulset ss doesn't 
scale past 0 for another 3.766894974s Jan 23 01:10:54.538: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.732918254s Jan 23 01:10:55.572: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.699112961s Jan 23 01:10:56.605: INFO: Verifying statefulset ss doesn't scale past 0 for another 665.591756ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8264 Jan 23 01:10:57.638: INFO: Scaling statefulset ss to 0 Jan 23 01:10:57.767: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 23 01:10:57.799: INFO: Deleting all statefulset in ns statefulset-8264 Jan 23 01:10:57.833: INFO: Scaling statefulset ss to 0 Jan 23 01:10:57.933: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 01:10:57.965: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:188 Jan 23 01:10:58.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-8264" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":61,"completed":34,"skipped":4098,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould not be blocked by dependency circle [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:10:58.149: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:10:58.530: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a79b0f02-9d13-4976-8f08-43c056906192", Controller:(*bool)(0xc002703d26), BlockOwnerDeletion:(*bool)(0xc002703d27)}} Jan 23 01:10:58.569: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"acf187d3-d32e-4c0b-92fe-704733e1c609", Controller:(*bool)(0xc002703fd6), BlockOwnerDeletion:(*bool)(0xc002703fd7)}} Jan 23 01:10:58.607: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2456750a-f30f-4480-ae2c-4452b083b784", Controller:(*bool)(0xc0005b37f6), BlockOwnerDeletion:(*bool)(0xc0005b37f7)}} [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 23 01:11:03.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-1342" for this suite. 
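The garbage-collector spec above links pod1 → pod3, pod2 → pod1 and pod3 → pod2 through owner references and verifies the dependency circle does not block deletion. A minimal sketch of how such an owner reference is attached, assuming hypothetical pod objects (names and UIDs are placeholders, not the ones logged above):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withOwner points pod at owner, the same shape as the
// pod1/pod2/pod3 OwnerReferences printed in the log above.
func withOwner(pod, owner *corev1.Pod) {
	controller := true
	block := true
	pod.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}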
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":61,"completed":35,"skipped":4105,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[90mPriorityClass endpoints�[0m �[1mverify PriorityClass endpoints can be operated with different HTTP methods [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:11:03.764: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-preemption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 23 01:11:04.102: INFO: Waiting up to 1m0s for all nodes to be ready Jan 23 01:12:04.430: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:12:04.464: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-preemption-path �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:12:04.805: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jan 23 01:12:04.839: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:188 Jan 23 01:12:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sched-preemption-path-3984" for this suite. 
[AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188
Jan 23 01:12:05.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6551" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":61,"completed":36,"skipped":4238,"failed":0}
------------------------------
[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 23 01:12:05.423: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/framework/framework.go:652
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:188
Jan 23 01:18:01.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-3261" for this suite.
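The CronJob spec above creates a ForbidConcurrent cron job, waits for exactly one running Job, and confirms no second Job is scheduled while it runs. A hedged sketch of such an object against batch/v1; the schedule, image and command are made up for illustration:

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// forbidConcurrent fires every minute but never starts a new Job
// while the previous one is still running.
var forbidConcurrent = batchv1.CronJob{
	ObjectMeta: metav1.ObjectMeta{Name: "forbid-example"},
	Spec: batchv1.CronJobSpec{
		Schedule:          "*/1 * * * *",
		ConcurrencyPolicy: batchv1.ForbidConcurrent,
		JobTemplate: batchv1.JobTemplateSpec{
			Spec: batchv1.JobSpec{
				Template: corev1.PodTemplateSpec{
					Spec: corev1.PodSpec{
						RestartPolicy: corev1.RestartPolicyOnFailure,
						Containers: []corev1.Container{{
							Name:    "sleep",
							Image:   "busybox", // placeholder image
							Command: []string{"sleep", "300"},
						}},
					},
				},
			},
		},
	},
}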
• [SLOW TEST:356.559 seconds]
[sig-apps] CronJob test/e2e/apps/framework.go:23
should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":61,"completed":37,"skipped":4266,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 23 01:18:01.986: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas
�[1mSTEP�[0m: creating replicaset rs in namespace horizontal-pod-autoscaling-5586 �[1mSTEP�[0m: creating replicaset rs in namespace horizontal-pod-autoscaling-5586 I0123 01:18:02.307158 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-5586, replica count: 1 �[1mSTEP�[0m: Running controller I0123 01:18:12.359154 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-5586 I0123 01:18:12.438043 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-5586, replica count: 1 I0123 01:18:22.489047 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 01:18:27.489: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 23 01:18:27.522: INFO: RC rs: consume 250 millicores in total Jan 23 01:18:27.522: INFO: RC rs: setting consumption to 250 millicores in total Jan 23 01:18:27.522: INFO: RC rs: sending request to consume 250 millicores Jan 23 01:18:27.522: INFO: RC rs: consume 0 MB in total Jan 23 01:18:27.522: INFO: RC rs: sending request to consume 0 MB Jan 23 01:18:27.522: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:18:27.522: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 01:18:27.559: INFO: RC rs: setting consumption to 0 MB in total Jan 23 01:18:27.559: INFO: RC rs: consume custom metric 0 in total Jan 23 01:18:27.559: INFO: RC rs: setting bump of metric QPS to 0 in total Jan 23 01:18:27.559: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 23 01:18:27.559: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:18:27.645: INFO: waiting for 3 replicas (current: 1) Jan 23 01:18:47.680: INFO: waiting for 3 replicas (current: 3) Jan 23 01:18:47.680: INFO: RC rs: consume 700 millicores in total Jan 23 01:18:47.680: INFO: RC rs: setting consumption to 700 millicores in total Jan 23 01:18:47.713: INFO: waiting for 5 replicas (current: 3) Jan 23 01:18:57.560: INFO: RC rs: sending request to consume 0 MB Jan 23 01:18:57.560: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:18:57.581: INFO: RC rs: sending request to consume 700 millicores Jan 23 01:18:57.582: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 23 01:18:57.613: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 23 
01:18:57.613: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:19:07.747: INFO: waiting for 5 replicas (current: 3) Jan 23 01:19:27.596: INFO: RC rs: sending request to consume 0 MB Jan 23 01:19:27.596: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5586/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:19:27.746: INFO: waiting for 5 replicas (current: 5) �[1mSTEP�[0m: Removing consuming RC rs Jan 23 01:19:27.782: INFO: RC rs: stopping metric consumer Jan 23 01:19:27.782: INFO: RC rs: stopping mem consumer Jan 23 01:19:27.782: INFO: RC rs: stopping CPU consumer �[1mSTEP�[0m: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-5586, will wait for the garbage collector to delete the pods Jan 23 01:19:37.904: INFO: Deleting ReplicaSet.apps rs took: 36.18648ms Jan 23 01:19:38.004: INFO: Terminating ReplicaSet.apps rs pods took: 100.42938ms �[1mSTEP�[0m: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-5586, will wait for the garbage collector to delete the pods Jan 23 01:19:40.479: INFO: Deleting ReplicationController rs-ctrl took: 35.547538ms Jan 23 01:19:40.579: INFO: Terminating ReplicationController rs-ctrl pods took: 100.516538ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 01:19:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-5586" for this suite. 
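The autoscaling spec above drives CPU consumption against ReplicaSet "rs" (250 then 700 millicores) and waits for the replica count to go 1 → 3 → 5. A rough sketch of the autoscaler object such a flow depends on, written against autoscaling/v1; the utilization target and bounds here are illustrative, not the values used by the suite:

package sketch

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuHPA scales the ReplicaSet "rs" between 1 and 5 replicas based on
// average CPU utilization.
var (
	minReplicas int32 = 1
	targetCPU   int32 = 20
	cpuHPA            = autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-cpu"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "ReplicaSet",
				Name:       "rs",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
)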
�[32m•�[0m{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":38,"skipped":4484,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Variable Expansion�[0m �[1mshould fail substituting values in a volume subpath with backticks [Slow] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:19:42.415: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:19:46.748: INFO: Deleting pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" in namespace "var-expansion-7276" Jan 23 01:19:46.786: INFO: Wait up to 5m0s for pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 23 01:19:48.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-7276" for this suite. �[32m•�[0m{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":61,"completed":39,"skipped":4564,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] NoExecuteTaintManager Multiple Pods [Serial]�[0m �[1mevicts pods with minTolerationSeconds [Disruptive] [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:19:48.934: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename taint-multiple-pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/node/taints.go:348 Jan 23 01:19:49.160: INFO: Waiting up to 1m0s for all nodes to be ready Jan 23 01:20:49.429: INFO: Waiting for terminating namespaces to be deleted... 
[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:20:49.463: INFO: Starting informer... �[1mSTEP�[0m: Starting pods... Jan 23 01:20:49.571: INFO: Pod1 is running on capz-conf-96jhk. Tainting Node Jan 23 01:20:55.736: INFO: Pod2 is running on capz-conf-96jhk. Tainting Node �[1mSTEP�[0m: Trying to apply a taint on the Node �[1mSTEP�[0m: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute �[1mSTEP�[0m: Waiting for Pod1 and Pod2 to be deleted Jan 23 01:21:02.773: INFO: Noticed Pod "taint-eviction-b1" gets evicted. Jan 23 01:21:22.906: INFO: Noticed Pod "taint-eviction-b2" gets evicted. �[1mSTEP�[0m: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/framework.go:188 Jan 23 01:21:23.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "taint-multiple-pods-6775" for this suite. �[32m•�[0m{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":61,"completed":40,"skipped":4574,"failed":0} �[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Pods�[0m �[1mshould have their auto-restart back-off timer reset on image update [Slow][NodeConformance]�[0m �[37mtest/e2e/common/node/pods.go:682�[0m [BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:21:23.117: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] test/e2e/common/node/pods.go:682 Jan 23 01:21:23.416: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:21:25.450: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:21:27.450: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:21:29.451: INFO: The status of Pod pod-back-off-image is Running (Ready = true) �[1mSTEP�[0m: getting restart delay-0 Jan 23 01:23:27.432: INFO: getRestartDelay: restartCount = 4, finishedAt=2023-01-23 01:22:34 +0000 UTC restartedAt=2023-01-23 01:23:25 +0000 UTC (51s) �[1mSTEP�[0m: getting restart delay-1 Jan 23 01:24:55.375: INFO: getRestartDelay: restartCount = 5, finishedAt=2023-01-23 01:23:30 +0000 UTC restartedAt=2023-01-23 01:24:53 +0000 UTC (1m23s) �[1mSTEP�[0m: getting restart delay-2 Jan 23 01:27:43.015: INFO: getRestartDelay: restartCount = 6, finishedAt=2023-01-23 01:24:58 +0000 UTC restartedAt=2023-01-23 01:27:42 +0000 UTC (2m44s) �[1mSTEP�[0m: updating the image Jan 23 01:27:43.593: INFO: Successfully updated pod "pod-back-off-image" �[1mSTEP�[0m: get restart delay after image update Jan 23 01:28:09.150: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-23 01:27:51 +0000 UTC restartedAt=2023-01-23 01:28:08 +0000 UTC (17s) [AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jan 23 01:28:09.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
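The NoExecuteTaintManager spec above taints a node with kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute and watches two pods get evicted on different schedules. A sketch of that taint plus a toleration with a bounded tolerationSeconds; the 10-second window is illustrative, not the suite's value:

package sketch

import corev1 "k8s.io/api/core/v1"

// evictTaint matches the key/value/effect reported in the log above.
var evictTaint = corev1.Taint{
	Key:    "kubernetes.io/e2e-evict-taint-key",
	Value:  "evictTaintVal",
	Effect: corev1.TaintEffectNoExecute,
}

// A pod carrying this toleration stays on the tainted node for at most
// tolerationSeconds before the taint manager evicts it.
var tolerationSeconds int64 = 10

var shortToleration = corev1.Toleration{
	Key:               "kubernetes.io/e2e-evict-taint-key",
	Operator:          corev1.TolerationOpEqual,
	Value:             "evictTaintVal",
	Effect:            corev1.TaintEffectNoExecute,
	TolerationSeconds: &tolerationSeconds,
}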
�[1mSTEP�[0m: Destroying namespace "pods-4708" for this suite. �[32m• [SLOW TEST:406.113 seconds]�[0m [sig-node] Pods �[90mtest/e2e/common/node/framework.go:23�[0m should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] �[90mtest/e2e/common/node/pods.go:682�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":61,"completed":41,"skipped":4575,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] Cpu Resources [Serial]�[0m �[90mContainer limits�[0m �[1mshould not be exceeded after waiting 2 minutes�[0m �[37mtest/e2e/windows/cpu_limits.go:43�[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources 
[Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:28:09.236: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename cpu-resources-test-windows �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be exceeded after waiting 2 minutes test/e2e/windows/cpu_limits.go:43 �[1mSTEP�[0m: Creating one pod with limit set to '0.5' Jan 23 01:28:09.535: INFO: The status of Pod cpulimittest-2cc9c295-639d-4e91-a6ed-76179a07efb4 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:11.570: INFO: The status of Pod cpulimittest-2cc9c295-639d-4e91-a6ed-76179a07efb4 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:13.571: INFO: The status of Pod cpulimittest-2cc9c295-639d-4e91-a6ed-76179a07efb4 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:15.570: INFO: The status of Pod cpulimittest-2cc9c295-639d-4e91-a6ed-76179a07efb4 is Running (Ready = true) �[1mSTEP�[0m: Creating one pod with limit set to '500m' Jan 23 01:28:15.674: INFO: The status of Pod cpulimittest-60672de6-3a90-4b19-b7e5-f5ccba569c85 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:17.710: INFO: The status of Pod cpulimittest-60672de6-3a90-4b19-b7e5-f5ccba569c85 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:19.710: INFO: The status of Pod cpulimittest-60672de6-3a90-4b19-b7e5-f5ccba569c85 is Pending, waiting for it to be Running (with Ready = true) Jan 23 01:28:21.709: INFO: The status of Pod cpulimittest-60672de6-3a90-4b19-b7e5-f5ccba569c85 is Running (Ready = true) �[1mSTEP�[0m: Waiting 2 minutes �[1mSTEP�[0m: Ensuring pods are still running �[1mSTEP�[0m: Ensuring cpu doesn't exceed limit by >5% �[1mSTEP�[0m: Gathering node summary stats Jan 23 01:30:22.009: INFO: Pod cpulimittest-2cc9c295-639d-4e91-a6ed-76179a07efb4 usage: 0.45494521200000004 �[1mSTEP�[0m: Gathering node summary stats Jan 23 01:30:22.150: INFO: Pod cpulimittest-60672de6-3a90-4b19-b7e5-f5ccba569c85 usage: 0.439902978 [AfterEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/framework.go:188 Jan 23 01:30:22.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "cpu-resources-test-windows-7409" for this suite. 
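The Windows CPU-limits spec above runs one pod with limit '0.5' and one with '500m' (the same quantity spelled two ways) and checks that measured usage stays within about 5% of the limit. A sketch of such a pod; the names and image are placeholders:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuLimited pins the container to half a core on a Windows node;
// resource.MustParse("0.5") and "500m" parse to the same quantity.
var cpuLimited = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "cpulimittest-sketch"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
		Containers: []corev1.Container{{
			Name:  "cpu-burner",
			Image: "placeholder-image",
			Resources: corev1.ResourceRequirements{
				Limits: corev1.ResourceList{
					corev1.ResourceCPU: resource.MustParse("500m"),
				},
			},
		}},
	},
}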
�[32m• [SLOW TEST:132.993 seconds]�[0m [sig-windows] [Feature:Windows] Cpu Resources [Serial] �[90mtest/e2e/windows/framework.go:27�[0m Container limits �[90mtest/e2e/windows/cpu_limits.go:42�[0m should not be exceeded after waiting 2 minutes �[90mtest/e2e/windows/cpu_limits.go:43�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","total":61,"completed":42,"skipped":4840,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-scheduling] SchedulerPreemption [Serial]�[0m �[90mPreemptionExecutionPath�[0m �[1mruns ReplicaSets to verify preemption running path [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:30:22.233: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-preemption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Jan 23 01:30:22.572: INFO: Waiting up to 1m0s for all nodes to be ready Jan 23 01:31:22.884: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:31:22.918: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-preemption-path �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:496 �[1mSTEP�[0m: Finding an available node �[1mSTEP�[0m: Trying to launch a pod without a label to get a node which can launch it. �[1mSTEP�[0m: Explicitly delete pod here to free the resource it takes. Jan 23 01:31:27.331: INFO: found a healthy node: capz-conf-96jhk [It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:31:41.865: INFO: pods created so far: [1 1 1] Jan 23 01:31:41.865: INFO: length of pods created so far: 3 Jan 23 01:31:45.941: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath test/e2e/framework/framework.go:188 Jan 23 01:31:52.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sched-preemption-path-7365" for this suite. 
[AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:470 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188 Jan 23 01:31:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sched-preemption-2003" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 �[32m•�[0m{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":61,"completed":43,"skipped":4908,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Garbage collector�[0m �[1mshould delete jobs and pods created by cronjob�[0m �[37mtest/e2e/apimachinery/garbage_collector.go:1145�[0m [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:31:53.483: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete jobs and pods created by cronjob test/e2e/apimachinery/garbage_collector.go:1145 �[1mSTEP�[0m: Create the cronjob �[1mSTEP�[0m: Wait for the CronJob to create new Job �[1mSTEP�[0m: Delete the cronjob �[1mSTEP�[0m: Verify if cronjob does not leave jobs nor pods behind �[1mSTEP�[0m: Gathering metrics Jan 23 01:32:00.610: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true) Jan 23 01:32:00.975: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 23 01:32:00.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-2028" for this suite. 
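The garbage-collector spec above deletes a CronJob and then verifies no Jobs or Pods are left behind. A sketch of the corresponding client-go call with an explicit propagation policy; the clientset wiring, namespace and name are assumptions, not taken from the suite:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteCronJobCascading removes the CronJob and lets the garbage
// collector clean up dependent Jobs and Pods in the background.
func deleteCronJobCascading(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().CronJobs(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}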
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":61,"completed":44,"skipped":4947,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]�[0m �[90mAllocatable node memory�[0m �[1mshould be equal to a calculated allocatable memory value�[0m �[37mtest/e2e/windows/memory_limits.go:54�[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:32:01.053: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename memory-limit-test-windows �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace 
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48
[It] should be equal to a calculated allocatable memory value test/e2e/windows/memory_limits.go:54
STEP: Getting memory details from node status and kubelet config
Jan 23 01:32:01.318: INFO: Getting configuration details for node capz-conf-2xrmj
Jan 23 01:32:01.366: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}}
STEP: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI}
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:188
Jan 23 01:32:01.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-9668" for this suite.
•{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value","total":61,"completed":45,"skipped":5211,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 23 01:32:01.442: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object,
basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: create the rc1 �[1mSTEP�[0m: create the rc2 �[1mSTEP�[0m: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well �[1mSTEP�[0m: delete the rc simpletest-rc-to-be-deleted �[1mSTEP�[0m: wait for the rc to be deleted Jan 23 01:32:14.014: INFO: 68 pods remaining Jan 23 01:32:14.014: INFO: 68 pods has nil DeletionTimestamp Jan 23 01:32:14.014: INFO: �[1mSTEP�[0m: Gathering metrics Jan 23 01:32:19.102: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true) Jan 23 01:32:19.477: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 23 01:32:19.478: INFO: Deleting pod "simpletest-rc-to-be-deleted-2f6n5" in namespace "gc-8821" Jan 23 01:32:19.523: INFO: Deleting pod "simpletest-rc-to-be-deleted-45ltr" in namespace "gc-8821" Jan 23 01:32:19.569: INFO: Deleting pod "simpletest-rc-to-be-deleted-47vjg" in namespace "gc-8821" Jan 23 01:32:19.617: INFO: Deleting pod "simpletest-rc-to-be-deleted-48h7d" in namespace "gc-8821" Jan 23 01:32:19.663: INFO: Deleting pod "simpletest-rc-to-be-deleted-4bcv2" in namespace "gc-8821" Jan 23 01:32:19.705: INFO: Deleting pod "simpletest-rc-to-be-deleted-4cr68" in namespace "gc-8821" Jan 23 01:32:19.749: INFO: Deleting pod "simpletest-rc-to-be-deleted-4djbj" in namespace "gc-8821" Jan 23 01:32:19.792: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vshg" in namespace "gc-8821" Jan 23 01:32:19.834: INFO: Deleting pod "simpletest-rc-to-be-deleted-547rk" in namespace "gc-8821" Jan 23 01:32:19.887: INFO: Deleting pod "simpletest-rc-to-be-deleted-5hg9s" in namespace "gc-8821" Jan 23 01:32:19.927: INFO: Deleting pod "simpletest-rc-to-be-deleted-5jmpr" in namespace "gc-8821" Jan 23 01:32:19.971: INFO: Deleting pod "simpletest-rc-to-be-deleted-5jrz8" in namespace "gc-8821" Jan 23 01:32:20.017: INFO: Deleting pod "simpletest-rc-to-be-deleted-5scch" in namespace "gc-8821" Jan 23 01:32:20.066: INFO: Deleting pod "simpletest-rc-to-be-deleted-5zcr9" in namespace "gc-8821" Jan 23 01:32:20.110: INFO: Deleting pod "simpletest-rc-to-be-deleted-6dz6c" in namespace "gc-8821" Jan 23 01:32:20.154: INFO: Deleting pod "simpletest-rc-to-be-deleted-6pwlq" in namespace "gc-8821" Jan 23 01:32:20.200: INFO: Deleting pod "simpletest-rc-to-be-deleted-72t75" in namespace "gc-8821" Jan 23 01:32:20.244: INFO: Deleting 
pod "simpletest-rc-to-be-deleted-79qf2" in namespace "gc-8821" Jan 23 01:32:20.285: INFO: Deleting pod "simpletest-rc-to-be-deleted-7dpfh" in namespace "gc-8821" Jan 23 01:32:20.341: INFO: Deleting pod "simpletest-rc-to-be-deleted-7jdxv" in namespace "gc-8821" Jan 23 01:32:20.389: INFO: Deleting pod "simpletest-rc-to-be-deleted-7k97r" in namespace "gc-8821" Jan 23 01:32:20.431: INFO: Deleting pod "simpletest-rc-to-be-deleted-7rjf2" in namespace "gc-8821" Jan 23 01:32:20.481: INFO: Deleting pod "simpletest-rc-to-be-deleted-82zph" in namespace "gc-8821" Jan 23 01:32:20.524: INFO: Deleting pod "simpletest-rc-to-be-deleted-85h5b" in namespace "gc-8821" Jan 23 01:32:20.566: INFO: Deleting pod "simpletest-rc-to-be-deleted-8c5z5" in namespace "gc-8821" Jan 23 01:32:20.609: INFO: Deleting pod "simpletest-rc-to-be-deleted-8db5d" in namespace "gc-8821" Jan 23 01:32:20.649: INFO: Deleting pod "simpletest-rc-to-be-deleted-8dkdl" in namespace "gc-8821" Jan 23 01:32:20.691: INFO: Deleting pod "simpletest-rc-to-be-deleted-8ln2n" in namespace "gc-8821" Jan 23 01:32:20.739: INFO: Deleting pod "simpletest-rc-to-be-deleted-9592w" in namespace "gc-8821" Jan 23 01:32:20.781: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lntx" in namespace "gc-8821" Jan 23 01:32:20.826: INFO: Deleting pod "simpletest-rc-to-be-deleted-bgqln" in namespace "gc-8821" Jan 23 01:32:20.868: INFO: Deleting pod "simpletest-rc-to-be-deleted-bnw76" in namespace "gc-8821" Jan 23 01:32:20.912: INFO: Deleting pod "simpletest-rc-to-be-deleted-bwvbd" in namespace "gc-8821" Jan 23 01:32:20.964: INFO: Deleting pod "simpletest-rc-to-be-deleted-cw26k" in namespace "gc-8821" Jan 23 01:32:21.008: INFO: Deleting pod "simpletest-rc-to-be-deleted-cwwwl" in namespace "gc-8821" Jan 23 01:32:21.052: INFO: Deleting pod "simpletest-rc-to-be-deleted-dd59t" in namespace "gc-8821" Jan 23 01:32:21.096: INFO: Deleting pod "simpletest-rc-to-be-deleted-dthmq" in namespace "gc-8821" Jan 23 01:32:21.136: INFO: Deleting pod "simpletest-rc-to-be-deleted-dw2sn" in namespace "gc-8821" Jan 23 01:32:21.184: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2qrj" in namespace "gc-8821" Jan 23 01:32:21.223: INFO: Deleting pod "simpletest-rc-to-be-deleted-f5m5f" in namespace "gc-8821" Jan 23 01:32:21.270: INFO: Deleting pod "simpletest-rc-to-be-deleted-ffgh4" in namespace "gc-8821" Jan 23 01:32:21.314: INFO: Deleting pod "simpletest-rc-to-be-deleted-fk4z2" in namespace "gc-8821" Jan 23 01:32:21.353: INFO: Deleting pod "simpletest-rc-to-be-deleted-fwlnp" in namespace "gc-8821" Jan 23 01:32:21.396: INFO: Deleting pod "simpletest-rc-to-be-deleted-g2vtm" in namespace "gc-8821" Jan 23 01:32:21.441: INFO: Deleting pod "simpletest-rc-to-be-deleted-g576h" in namespace "gc-8821" Jan 23 01:32:21.488: INFO: Deleting pod "simpletest-rc-to-be-deleted-g69fk" in namespace "gc-8821" Jan 23 01:32:21.528: INFO: Deleting pod "simpletest-rc-to-be-deleted-gjk6z" in namespace "gc-8821" Jan 23 01:32:21.566: INFO: Deleting pod "simpletest-rc-to-be-deleted-gnfg6" in namespace "gc-8821" Jan 23 01:32:21.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-gvhc6" in namespace "gc-8821" Jan 23 01:32:21.653: INFO: Deleting pod "simpletest-rc-to-be-deleted-gxr2s" in namespace "gc-8821" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 23 01:32:21.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-8821" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":61,"completed":46,"skipped":5359,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-scheduling] SchedulerPredicates [Serial]�[0m �[1mvalidates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]�[0m �[37mtest/e2e/framework/framework.go:652�[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 23 01:32:21.782: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sched-pred �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 23 01:32:22.034: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 01:32:22.104: INFO: Waiting for terminating namespaces to be deleted... Jan 23 01:32:22.137: INFO: Logging pods the apiserver thinks is on node capz-conf-2xrmj before test Jan 23 01:32:22.174: INFO: calico-node-windows-v55p5 from calico-system started at 2023-01-22 23:20:32 +0000 UTC (2 container statuses recorded) Jan 23 01:32:22.174: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:32:22.175: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:32:22.175: INFO: containerd-logger-h7zw5 from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 01:32:22.175: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:32:22.175: INFO: csi-azuredisk-node-win-b7hkf from kube-system started at 2023-01-22 23:21:02 +0000 UTC (3 container statuses recorded) Jan 23 01:32:22.175: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:32:22.175: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:32:22.175: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 01:32:22.175: INFO: csi-proxy-x5wwz from kube-system started at 2023-01-22 23:21:02 +0000 UTC (1 container statuses recorded) Jan 23 01:32:22.175: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:32:22.175: INFO: kube-proxy-windows-bms6h from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 01:32:22.175: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 01:32:22.175: INFO: Logging pods the apiserver thinks is on node capz-conf-96jhk before test Jan 23 01:32:22.213: INFO: calico-node-windows-b54b2 from calico-system started at 2023-01-22 23:19:35 +0000 UTC (2 container statuses recorded) Jan 23 01:32:22.213: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:32:22.213: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:32:22.213: INFO: containerd-logger-k8bhm from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 01:32:22.213: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:32:22.213: INFO: csi-azuredisk-node-win-vhcrv 
Jan 23 01:32:22.213: INFO: Container azuredisk ready: true, restart count 0
Jan 23 01:32:22.213: INFO: Container liveness-probe ready: true, restart count 0
Jan 23 01:32:22.213: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 23 01:32:22.214: INFO: csi-proxy-llbbf from kube-system started at 2023-01-23 01:21:23 +0000 UTC (1 container statuses recorded)
Jan 23 01:32:22.214: INFO: Container csi-proxy ready: true, restart count 0
Jan 23 01:32:22.214: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded)
Jan 23 01:32:22.214: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
Jan 23 01:33:22.357: FAIL: Unexpected error:
    <*errors.errorString | 0xc00021c1e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.runPausePodWithTimeout(0xc000a2f080, {{0x718eba2, 0xd}, {0x0, 0x0}, 0x0, 0x0, 0x0, 0x0, 0x0, ...}, ...)
    test/e2e/scheduling/predicates.go:883 +0xcd
k8s.io/kubernetes/test/e2e/scheduling.runPausePod(...)
    test/e2e/scheduling/predicates.go:878
k8s.io/kubernetes/test/e2e/scheduling.runPodAndGetNodeName(0xc000a2f080, {{0x718eba2, 0xd}, {0x0, 0x0}, 0x0, 0x0, 0x0, 0x0, 0x0, ...})
    test/e2e/scheduling/predicates.go:894 +0x6c
k8s.io/kubernetes/test/e2e/scheduling.GetNodeThatCanRunPod(0xc00404e700?)
    test/e2e/scheduling/predicates.go:966 +0x85
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.13()
    test/e2e/scheduling/predicates.go:700 +0x66
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000503040, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
STEP: Collecting events from namespace "sched-pred-2277".
STEP: Found 3 events.
Jan 23 01:33:22.391: INFO: At 2023-01-23 01:32:22 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-2277/without-label to capz-conf-2xrmj Jan 23 01:33:22.391: INFO: At 2023-01-23 01:33:15 +0000 UTC - event for without-label: {kubelet capz-conf-2xrmj} Pulled: Container image "k8s.gcr.io/pause:3.7" already present on machine Jan 23 01:33:22.392: INFO: At 2023-01-23 01:33:16 +0000 UTC - event for without-label: {kubelet capz-conf-2xrmj} Created: Created container without-label Jan 23 01:33:22.426: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 01:33:22.426: INFO: without-label capz-conf-2xrmj Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:32:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:32:22 +0000 UTC ContainersNotReady containers with unready status: [without-label]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:32:22 +0000 UTC ContainersNotReady containers with unready status: [without-label]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-23 01:32:22 +0000 UTC }] Jan 23 01:33:22.426: INFO: Jan 23 01:33:22.460: INFO: Unable to fetch sched-pred-2277/without-label/without-label logs: the server rejected our request for an unknown reason (get pods without-label) Jan 23 01:33:22.495: INFO: Logging node info for node capz-conf-2xrmj Jan 23 01:33:22.529: INFO: Node Info: &Node{ObjectMeta:{capz-conf-2xrmj 5247e4e6-f587-4010-8bd9-f81765117274 34625 0 2023-01-22 23:20:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-2xrmj kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-zs64h3 cluster.x-k8s.io/cluster-namespace:capz-conf-zs64h3 cluster.x-k8s.io/machine:capz-conf-zs64h3-md-win-67dfd985d8-q945m cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-zs64h3-md-win-67dfd985d8 csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-conf-2xrmj"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.1 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:18:9e:ac volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-22 23:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2023-01-22 23:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-22 23:20:36 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {calico-node.exe Update v1 2023-01-22 23:21:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {manager Update v1 2023-01-22 23:21:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet.exe Update v1 2023-01-23 01:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-zs64h3/providers/Microsoft.Compute/virtualMachines/capz-conf-2xrmj,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-23 01:32:31 +0000 UTC,LastTransitionTime:2023-01-22 23:20:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-23 01:32:31 +0000 UTC,LastTransitionTime:2023-01-22 23:20:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-23 01:32:31 +0000 UTC,LastTransitionTime:2023-01-22 23:20:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-23 01:32:31 +0000 UTC,LastTransitionTime:2023-01-22 23:21:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-2xrmj,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-2xrmj,SystemUUID:D8513319-5343-4937-9668-EFC8403F40BB,BootID:9,KernelVersion:10.0.17763.3770,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:130115533,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:112797444,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:111834447,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:89f16100a57624bfa729b9e50c941b46a4fdceaa8818b96bdad6cab8ff44ca45 k8s.gcr.io/e2e-test-images/resource-consumer:1.10],SizeBytes:105490980,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:2082c9b6488b3a2839141f472740c36484d5cbc91f7c24d67bc77ea311d4602b docker.io/sigwindowstools/calico-install:v3.24.5-hostprocess],SizeBytes:49820336,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ba0ac4633a832430a00374ef6cf1c701797017b8d09ccc3fb12db253e250887a docker.io/sigwindowstools/calico-node:v3.24.5-hostprocess],SizeBytes:28623190,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 23 01:33:22.529: INFO: Logging kubelet events for node capz-conf-2xrmj Jan 23 01:33:22.562: INFO: Logging pods the kubelet thinks is on node capz-conf-2xrmj Jan 23 01:33:22.603: INFO: without-label 
started at 2023-01-23 01:32:22 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:22.603: INFO: Container without-label ready: false, restart count 0 Jan 23 01:33:22.603: INFO: kube-proxy-windows-bms6h started at 2023-01-22 23:20:32 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:22.603: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 01:33:22.603: INFO: csi-proxy-x5wwz started at 2023-01-22 23:21:02 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:22.603: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:33:22.603: INFO: calico-node-windows-v55p5 started at 2023-01-22 23:20:32 +0000 UTC (1+2 container statuses recorded) Jan 23 01:33:22.603: INFO: Init container install-cni ready: true, restart count 0 Jan 23 01:33:22.603: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:33:22.603: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:33:22.603: INFO: containerd-logger-h7zw5 started at 2023-01-22 23:20:32 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:22.603: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:33:22.603: INFO: csi-azuredisk-node-win-b7hkf started at 2023-01-22 23:21:02 +0000 UTC (1+3 container statuses recorded) Jan 23 01:33:22.603: INFO: Init container init ready: true, restart count 0 Jan 23 01:33:22.603: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:33:22.603: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:33:22.603: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 01:33:23.353: INFO: Latency metrics for node capz-conf-2xrmj Jan 23 01:33:23.353: INFO: Logging node info for node capz-conf-96jhk Jan 23 01:33:23.386: INFO: Node Info: &Node{ObjectMeta:{capz-conf-96jhk aa6522c7-d77d-4487-bac3-a0aaa08e6291 32935 0 2023-01-22 23:19:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-96jhk kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-zs64h3 cluster.x-k8s.io/cluster-namespace:capz-conf-zs64h3 cluster.x-k8s.io/machine:capz-conf-zs64h3-md-win-67dfd985d8-q88x8 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-zs64h3-md-win-67dfd985d8 csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-conf-96jhk"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.198.1 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:77:00:27 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-01-22 23:19:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-22 23:19:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-22 23:19:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {calico-node.exe Update v1 2023-01-22 23:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {manager Update v1 2023-01-22 23:21:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {e2e.test Update v1 2023-01-23 01:31:27 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2023-01-23 01:31:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-zs64h3/providers/Microsoft.Compute/virtualMachines/capz-conf-96jhk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:29 +0000 UTC,LastTransitionTime:2023-01-22 23:19:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:29 +0000 UTC,LastTransitionTime:2023-01-22 23:19:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:29 +0000 UTC,LastTransitionTime:2023-01-22 23:19:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-23 01:31:29 +0000 UTC,LastTransitionTime:2023-01-22 23:20:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-96jhk,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-96jhk,SystemUUID:0CA14407-155A-4A29-A3D7-E9B2155962EB,BootID:9,KernelVersion:10.0.17763.3770,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:130115533,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:112797444,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:111834447,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:89f16100a57624bfa729b9e50c941b46a4fdceaa8818b96bdad6cab8ff44ca45 k8s.gcr.io/e2e-test-images/resource-consumer:1.10],SizeBytes:105490980,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 
k8s.gcr.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:2082c9b6488b3a2839141f472740c36484d5cbc91f7c24d67bc77ea311d4602b docker.io/sigwindowstools/calico-install:v3.24.5-hostprocess],SizeBytes:49820336,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ba0ac4633a832430a00374ef6cf1c701797017b8d09ccc3fb12db253e250887a docker.io/sigwindowstools/calico-node:v3.24.5-hostprocess],SizeBytes:28623190,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 23 01:33:23.387: INFO: Logging kubelet events for node capz-conf-96jhk Jan 23 01:33:23.419: INFO: Logging pods the kubelet thinks is on node capz-conf-96jhk Jan 23 01:33:23.473: INFO: csi-azuredisk-node-win-vhcrv started at 2023-01-23 01:21:23 +0000 UTC (1+3 container statuses recorded) Jan 23 01:33:23.473: INFO: Init container init ready: true, restart count 0 Jan 23 01:33:23.473: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:33:23.473: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:33:23.473: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 01:33:23.473: INFO: csi-proxy-llbbf started at 2023-01-23 01:21:23 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:23.473: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:33:23.473: INFO: calico-node-windows-b54b2 started at 2023-01-22 23:19:35 +0000 UTC (1+2 container statuses recorded) Jan 23 01:33:23.473: INFO: Init container install-cni ready: true, restart count 0 Jan 23 01:33:23.473: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:33:23.473: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:33:23.473: INFO: kube-proxy-windows-mrr95 started at 2023-01-22 23:19:35 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:23.473: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 01:33:23.473: INFO: containerd-logger-k8bhm started at 2023-01-22 23:19:35 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:23.473: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:33:27.169: INFO: Latency metrics for node capz-conf-96jhk Jan 23 01:33:27.169: INFO: Logging node info for node capz-conf-zs64h3-control-plane-dlccj Jan 23 01:33:27.202: INFO: Node Info: &Node{ObjectMeta:{capz-conf-zs64h3-control-plane-dlccj 11794dd0-fec3-41c1-9869-8f228fa5f8f1 33006 0 2023-01-22 23:16:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_B2s beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:canadacentral failure-domain.beta.kubernetes.io/zone:canadacentral-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-zs64h3-control-plane-dlccj kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_B2s topology.disk.csi.azure.com/zone:canadacentral-1 topology.kubernetes.io/region:canadacentral topology.kubernetes.io/zone:canadacentral-1] map[cluster.x-k8s.io/cluster-name:capz-conf-zs64h3 cluster.x-k8s.io/cluster-namespace:capz-conf-zs64h3 cluster.x-k8s.io/machine:capz-conf-zs64h3-control-plane-t2n9m cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-zs64h3-control-plane 
csi.volume.kubernetes.io/nodeid:{"csi.tigera.io":"capz-conf-zs64h3-control-plane-dlccj","disk.csi.azure.com":"capz-conf-zs64h3-control-plane-dlccj"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.221.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-22 23:16:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-22 23:16:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2023-01-22 23:17:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-01-22 23:17:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {calico-node Update v1 2023-01-22 23:17:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-22 23:18:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-zs64h3/providers/Microsoft.Compute/virtualMachines/capz-conf-zs64h3-control-plane-dlccj,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4123176960 0} {<nil>} 4026540Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4018319360 0} {<nil>} 3924140Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-22 23:17:41 +0000 UTC,LastTransitionTime:2023-01-22 23:17:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:39 +0000 UTC,LastTransitionTime:2023-01-22 23:16:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:39 +0000 UTC,LastTransitionTime:2023-01-22 23:16:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-23 01:31:39 +0000 UTC,LastTransitionTime:2023-01-22 23:16:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-23 01:31:39 +0000 UTC,LastTransitionTime:2023-01-22 23:17:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-zs64h3-control-plane-dlccj,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1069622011f4044bb01ce02fbed0d74,SystemUUID:bea0d9c9-574c-2649-bb33-0a4a0e606c46,BootID:50c65880-43d9-48c6-8b47-39080e7d7005,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.6+7c685ed7305e76,KubeProxyVersion:v1.24.11-rc.0.6+7c685ed7305e76,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-apiserver:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:131733971,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-controller-manager:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:121342265,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-proxy:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:112212023,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:907b259fe0c9f5adda9f00a91b8a8228f4f38768021fb6d05cbad0538ef8f99a mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.1],SizeBytes:96300330,},ContainerImage{Names:[docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 
docker.io/calico/cni:v3.25.0],SizeBytes:87984941,},ContainerImage{Names:[docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 docker.io/calico/node:v3.25.0],SizeBytes:87185935,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:3ef7d954946bd1cf9e5e3564a8d1acf8e5852616f7ae96bcbc5ced8c275483ee mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0],SizeBytes:61391360,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:9ba6483d2f8aa6051cb3a50e42d638fc17a6e4699a6689f054969024b7c12944 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0],SizeBytes:58560473,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:bc317fea7e7bbaff65130d7ac6ea7c96bc15eb1f086374b8c3359f11988ac024 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0],SizeBytes:57948644,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.6_7c685ed7305e76 registry.k8s.io/kube-scheduler:v1.24.11-rc.0.6_7c685ed7305e76],SizeBytes:52751160,},ContainerImage{Names:[docker.io/calico/apiserver@sha256:9819c1b569e60eec4dbab82c1b41cee80fe8af282b25ba2c174b2a00ae555af6 docker.io/calico/apiserver:v3.25.0],SizeBytes:35624155,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3 docker.io/calico/kube-controllers:v3.25.0],SizeBytes:31271800,},ContainerImage{Names:[docker.io/calico/typha@sha256:f7e0557e03f422c8ba5fcf64ef0fac054ee99935b5d101a0a50b5e9b65f6a5c5 docker.io/calico/typha:v3.25.0],SizeBytes:28533187,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582 quay.io/tigera/operator:v1.29.0],SizeBytes:21108896,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/calico/node-driver-registrar@sha256:f559ee53078266d2126732303f588b9d4266607088e457ea04286f31727676f7 
docker.io/calico/node-driver-registrar:v3.25.0],SizeBytes:11133658,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:10076715,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:9117963,},ContainerImage{Names:[docker.io/calico/csi@sha256:61a95f3ee79a7e591aff9eff535be73e62d2c3931d07c2ea8a1305f7bea19b31 docker.io/calico/csi:v3.25.0],SizeBytes:9076936,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:01ddd57d428787b3ac689daa685660defe4bd7810069544bd43a9103a7b0a789 docker.io/calico/pod2daemon-flexvol:v3.25.0],SizeBytes:7076045,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 23 01:33:27.203: INFO: Logging kubelet events for node capz-conf-zs64h3-control-plane-dlccj Jan 23 01:33:27.236: INFO: Logging pods the kubelet thinks is on node capz-conf-zs64h3-control-plane-dlccj Jan 23 01:33:27.299: INFO: kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj started at 2023-01-22 23:16:53 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:27.299: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 23 01:33:27.299: INFO: kube-proxy-76knr started at 2023-01-22 23:17:08 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:27.299: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 01:33:27.299: INFO: tigera-operator-65d6bf4d4f-kmtvm started at 2023-01-22 23:16:56 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:27.299: INFO: Container tigera-operator ready: true, restart count 0 Jan 23 01:33:27.299: INFO: calico-node-4wzwq started at 2023-01-22 23:17:03 +0000 UTC (2+1 container statuses recorded) Jan 23 01:33:27.299: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 23 01:33:27.299: INFO: Init container install-cni ready: true, restart count 0 Jan 23 01:33:27.299: INFO: Container calico-node ready: true, restart count 0 Jan 23 01:33:27.299: INFO: csi-node-driver-zxc7j started at 2023-01-22 23:17:35 +0000 UTC (0+2 container statuses recorded) Jan 23 01:33:27.299: INFO: Container calico-csi ready: true, restart count 0 Jan 23 01:33:27.299: INFO: Container csi-node-driver-registrar ready: true, restart count 0 Jan 23 01:33:27.299: INFO: calico-apiserver-7f7758c56-4445b started at 2023-01-22 23:18:03 +0000 UTC (0+1 container statuses recorded) Jan 23 01:33:27.299: INFO: Container calico-apiserver ready: true, restart count 0 Jan 23 01:33:27.299: INFO: csi-azuredisk-node-sbg2s started at 2023-01-22 23:18:22 +0000 UTC (0+3 container statuses recorded) Jan 23 01:33:27.299: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:33:27.299: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:33:27.299: INFO: Container node-driver-registrar ready: true, 
restart count 0
Jan 23 01:33:27.299: INFO: coredns-57575c5f89-vfrs6 started at 2023-01-22 23:17:35 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container coredns ready: true, restart count 0
Jan 23 01:33:27.299: INFO: calico-kube-controllers-594d54f99-r76g2 started at 2023-01-22 23:17:35 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container calico-kube-controllers ready: true, restart count 0
Jan 23 01:33:27.299: INFO: kube-scheduler-capz-conf-zs64h3-control-plane-dlccj started at 2023-01-22 23:16:53 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container kube-scheduler ready: true, restart count 0
Jan 23 01:33:27.299: INFO: calico-typha-646b464966-bv45m started at 2023-01-22 23:17:03 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container calico-typha ready: true, restart count 0
Jan 23 01:33:27.299: INFO: metrics-server-7d674f87b8-4bpgn started at 2023-01-22 23:17:35 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container metrics-server ready: true, restart count 0
Jan 23 01:33:27.299: INFO: coredns-57575c5f89-xkzsq started at 2023-01-22 23:17:35 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container coredns ready: true, restart count 0
Jan 23 01:33:27.299: INFO: calico-apiserver-7f7758c56-gzr5r started at 2023-01-22 23:18:02 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container calico-apiserver ready: true, restart count 0
Jan 23 01:33:27.299: INFO: csi-azuredisk-controller-545d478dbf-v86xv started at 2023-01-22 23:18:22 +0000 UTC (0+6 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container azuredisk ready: true, restart count 0
Jan 23 01:33:27.299: INFO: Container csi-attacher ready: true, restart count 0
Jan 23 01:33:27.299: INFO: Container csi-provisioner ready: true, restart count 0
Jan 23 01:33:27.299: INFO: Container csi-resizer ready: true, restart count 0
Jan 23 01:33:27.299: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 23 01:33:27.299: INFO: Container liveness-probe ready: true, restart count 0
Jan 23 01:33:27.299: INFO: etcd-capz-conf-zs64h3-control-plane-dlccj started at 2023-01-22 23:16:53 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container etcd ready: true, restart count 0
Jan 23 01:33:27.299: INFO: kube-apiserver-capz-conf-zs64h3-control-plane-dlccj started at 2023-01-22 23:16:53 +0000 UTC (0+1 container statuses recorded)
Jan 23 01:33:27.299: INFO: Container kube-apiserver ready: true, restart count 0
Jan 23 01:33:27.487: INFO: Latency metrics for node capz-conf-zs64h3-control-plane-dlccj
Jan 23 01:33:27.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2277" for this suite.
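For context on the failure above: GetNodeThatCanRunPod first launches a pause pod and waits for it to reach Running before the real hostPort pods are created, and here the pod "without-label" stayed Pending (ContainersNotReady) until the wait gave up. Below is a minimal sketch of that create-and-poll pattern, assuming client-go; the one-minute timeout matches the 01:32:22 to 01:33:22 window seen above, and this helper is illustrative rather than the framework's actual runPausePodWithTimeout.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs above (>>> kubeConfig: /tmp/kubeconfig).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Mirrors the spec's throwaway pause pod; namespace and name taken from the log.
	ns := "sched-pred-2277"
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "without-label"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "without-label", Image: "k8s.gcr.io/pause:3.7"}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the pod reports Running or the timeout expires; exhausting the timeout
	// is what surfaces as "timed out waiting for the condition".
	err = wait.PollImmediate(2*time.Second, 1*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return p.Status.Phase == v1.PodRunning, nil
	})
	fmt.Println("wait result:", err)
}

The kubelet events above show the container was created at 01:33:16 but never reported Ready, so the poll above would have run out of budget in exactly this way.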
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
• Failure [65.776 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] [It]
  test/e2e/framework/framework.go:652

  Jan 23 01:33:22.357: Unexpected error:
      <*errors.errorString | 0xc00021c1e0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/scheduling/predicates.go:883

  Full Stack Trace
  k8s.io/kubernetes/test/e2e/scheduling.runPausePodWithTimeout(0xc000a2f080, {{0x718eba2, 0xd}, {0x0, 0x0}, 0x0, 0x0, 0x0, 0x0, 0x0, ...}, ...)
      test/e2e/scheduling/predicates.go:883 +0xcd
  k8s.io/kubernetes/test/e2e/scheduling.runPausePod(...)
      test/e2e/scheduling/predicates.go:878
  k8s.io/kubernetes/test/e2e/scheduling.runPodAndGetNodeName(0xc000a2f080, {{0x718eba2, 0xd}, {0x0, 0x0}, 0x0, 0x0, 0x0, 0x0, 0x0, ...})
      test/e2e/scheduling/predicates.go:894 +0x6c
  k8s.io/kubernetes/test/e2e/scheduling.GetNodeThatCanRunPod(0xc00404e700?)
      test/e2e/scheduling/predicates.go:966 +0x85
  k8s.io/kubernetes/test/e2e/scheduling.glob..func4.13()
      test/e2e/scheduling/predicates.go:700 +0x66
  k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
      test/e2e/e2e.go:130 +0x6bb
  k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
      test/e2e/e2e_test.go:136 +0x19
  testing.tRunner(0xc000503040, 0x741f9a8)
      /usr/local/go/src/testing/testing.go:1446 +0x10b
  created by testing.(*T).Run
      /usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":61,"completed":46,"skipped":5380,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:33:27.561: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Jan 23 01:33:27.793: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 01:33:27.873: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 01:33:27.915: INFO: Logging pods the apiserver thinks is on node capz-conf-2xrmj before test
Jan 23 01:33:27.953: INFO: calico-node-windows-v55p5 from calico-system started at 2023-01-22 23:20:32 +0000 UTC (2 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container calico-node-felix ready: true, restart count 0
Jan 23 01:33:27.953: INFO: Container calico-node-startup ready: true, restart count 0
Jan 23 01:33:27.953: INFO: containerd-logger-h7zw5 from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container containerd-logger ready: true, restart count 0
Jan 23 01:33:27.953: INFO: csi-azuredisk-node-win-b7hkf from kube-system started at 2023-01-22 23:21:02 +0000 UTC (3 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container azuredisk ready: true, restart count 0
Jan 23 01:33:27.953: INFO: Container liveness-probe ready: true, restart count 0
Jan 23 01:33:27.953: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 23 01:33:27.953: INFO: csi-proxy-x5wwz from kube-system started at 2023-01-22 23:21:02 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container csi-proxy ready: true, restart count 0
Jan 23 01:33:27.953: INFO: kube-proxy-windows-bms6h from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container kube-proxy ready: true, restart count 0
Jan 23 01:33:27.953: INFO: without-label from sched-pred-2277 started at 2023-01-23 01:32:22 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.953: INFO: Container without-label ready: true, restart count 0
Jan 23 01:33:27.953: INFO: Logging pods the apiserver thinks is on node capz-conf-96jhk before test
Jan 23 01:33:27.992: INFO: calico-node-windows-b54b2 from calico-system started at 2023-01-22 23:19:35 +0000 UTC (2 container statuses recorded)
Jan 23 01:33:27.992: INFO: Container calico-node-felix ready: true, restart count 0
Jan 23 01:33:27.992: INFO: Container calico-node-startup ready: true, restart count 0
Jan 23 01:33:27.992: INFO: containerd-logger-k8bhm from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.992: INFO: Container containerd-logger ready: true, restart count 0
Jan 23 01:33:27.992: INFO: csi-azuredisk-node-win-vhcrv from kube-system started at 2023-01-23 01:21:23 +0000 UTC (3 container statuses recorded)
Jan 23 01:33:27.992: INFO: Container azuredisk ready: true, restart count 0
Jan 23 01:33:27.992: INFO: Container liveness-probe ready: true, restart count 0
Jan 23 01:33:27.992: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 23 01:33:27.992: INFO: csi-proxy-llbbf from kube-system started at 2023-01-23 01:21:23 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.992: INFO: Container csi-proxy ready: true, restart count 0
Jan 23 01:33:27.992: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded)
Jan 23 01:33:27.992: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-022f7408-c74d-4279-9484-0a2aa4b844d1 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-022f7408-c74d-4279-9484-0a2aa4b844d1 off the node capz-conf-96jhk
STEP: verifying the node doesn't have the label kubernetes.io/e2e-022f7408-c74d-4279-9484-0a2aa4b844d1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:33:46.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-568" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":61,"completed":47,"skipped":5460,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption
  validates proper pods are preempted
  test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:33:46.607: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Jan 23 01:33:46.944: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 23 01:34:47.201: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  test/e2e/scheduling/preemption.go:322
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node capz-conf-96jhk.
STEP: Apply 10 fake resource to node capz-conf-2xrmj.
[It] validates proper pods are preempted
  test/e2e/scheduling/preemption.go:355
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  test/e2e/scheduling/preemption.go:343
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-96jhk
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-2xrmj
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:35:34.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-602" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":61,"completed":48,"skipped":5485,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-api-machinery] Garbage collector
  should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:1040
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:35:35.134: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:1040
Jan 23 01:35:35.365: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 23 01:35:37.600: INFO: created owner resource "ownersbp8d"
Jan 23 01:35:37.635: INFO: created dependent resource "dependent7q7ql"
STEP: wait for the owner to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:36:37.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6911" for this suite.
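The orphan-deletion behaviour exercised above boils down to deleting the owner object with the Orphan propagation policy and then confirming that the garbage collector leaves the dependent in place. Below is a rough sketch using client-go's dynamic client; the GroupVersionResource, namespace scoping, and object names are placeholders, since the spec creates its own throwaway CRD and random names such as "ownersbp8d".

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical custom resource standing in for the test's generated CRD.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "owners"}

	// Orphan propagation: the owner is deleted, but objects whose ownerReferences
	// point at it must be left behind by the garbage collector.
	orphan := metav1.DeletePropagationOrphan
	err = dyn.Resource(gvr).Namespace("gc-6911").Delete(
		context.Background(), "ownersbp8d", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	// A dependent such as "dependent7q7ql" should still exist 30 seconds later,
	// which is exactly what the spec waits for above.
}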
•{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":61,"completed":49,"skipped":5587,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
test/e2e/apimachinery/garbage_collector.go:439
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:36:38.039: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil test/e2e/apimachinery/garbage_collector.go:439
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 23 01:37:13.560: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true)
Jan 23 01:37:13.903: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
Jan 23 01:37:13.903: INFO: Deleting pod "simpletest.rc-fqltg" in namespace "gc-4646"
Jan 23 01:37:13.943: INFO: Deleting pod "simpletest.rc-v49pq" in namespace "gc-4646"
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188
Jan 23 01:37:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4646" for this suite.
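A rough way to confirm what the spec above asserts, namely that the simpletest.rc pods outlive the deleted ReplicationController as orphans, is to list them and inspect their controller reference. A hedged sketch using the namespace from the log; not the test's own verification code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("gc-4646").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		// Orphaned pods keep running but no longer point back at the deleted RC.
		if ref := metav1.GetControllerOf(&pods.Items[i]); ref == nil {
			fmt.Printf("%s is orphaned (no controller reference)\n", pods.Items[i].Name)
		} else {
			fmt.Printf("%s is still owned by %s/%s\n", pods.Items[i].Name, ref.Kind, ref.Name)
		}
	}
}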
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":61,"completed":50,"skipped":5625,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:37:14.062: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/node/taints.go:166
Jan 23 01:37:14.292: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 23 01:38:14.520: INFO: Waiting for terminating namespaces to be deleted...
[It] removing taint cancels eviction [Disruptive] [Conformance] test/e2e/framework/framework.go:652
Jan 23 01:38:14.554: INFO: Starting informer...
STEP: Starting pod...
Jan 23 01:38:14.625: INFO: Pod is running on capz-conf-2xrmj.
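The NoExecuteTaintManager spec that continues below applies a NoExecute taint to the node the pod runs on and then removes it before the pod's toleration window runs out, so the queued eviction is cancelled. Roughly, the pod carries a toleration with a tolerationSeconds bound along these lines (a sketch with assumed values, not the test's exact manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The taint the test applies to the node (key/value/effect taken from the log).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-evict-taint-key",
		Value:  "evictTaintVal",
		Effect: corev1.TaintEffectNoExecute,
	}

	// A pod that tolerates that taint for a bounded time; if the taint is removed
	// before tolerationSeconds elapse, the pending eviction is cancelled.
	tolerationSeconds := int64(75) // assumed window; the real test uses its own constant
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "taint-eviction-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.8"}},
			Tolerations: []corev1.Toleration{{
				Key:               taint.Key,
				Operator:          corev1.TolerationOpEqual,
				Value:             taint.Value,
				Effect:            corev1.TaintEffectNoExecute,
				TolerationSeconds: &tolerationSeconds,
			}},
		},
	}
	fmt.Printf("pod %q tolerates %s=%s:%s for %ds\n",
		pod.Name, taint.Key, taint.Value, taint.Effect, tolerationSeconds)
}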
Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting short time to make sure Pod is queued for deletion
Jan 23 01:38:14.739: INFO: Pod wasn't evicted. Proceeding
Jan 23 01:38:14.739: INFO: Removing taint from Node
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting some time to make sure that toleration time passed.
Jan 23 01:39:29.849: INFO: Pod wasn't evicted. Test successful
[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] test/e2e/framework/framework.go:188
Jan 23 01:39:29.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-1775" for this suite.
• [SLOW TEST:135.859 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":61,"completed":51,"skipped":5804,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:39:29.926: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be
provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 23 01:39:30.157: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 01:39:30.228: INFO: Waiting for terminating namespaces to be deleted... Jan 23 01:39:30.261: INFO: Logging pods the apiserver thinks is on node capz-conf-2xrmj before test Jan 23 01:39:30.300: INFO: calico-node-windows-v55p5 from calico-system started at 2023-01-22 23:20:32 +0000 UTC (2 container statuses recorded) Jan 23 01:39:30.300: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:39:30.300: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:39:30.300: INFO: containerd-logger-h7zw5 from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.300: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:39:30.301: INFO: csi-azuredisk-node-win-54x8v from kube-system started at 2023-01-23 01:38:15 +0000 UTC (3 container statuses recorded) Jan 23 01:39:30.301: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:39:30.301: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:39:30.301: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 01:39:30.301: INFO: csi-proxy-khj48 from kube-system started at 2023-01-23 01:38:15 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.301: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:39:30.301: INFO: kube-proxy-windows-bms6h from kube-system started at 2023-01-22 23:20:32 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.301: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 01:39:30.301: INFO: taint-eviction-4 from taint-single-pod-1775 started at 2023-01-23 01:38:14 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.301: INFO: Container pause ready: true, restart count 0 Jan 23 01:39:30.301: INFO: Logging pods the apiserver thinks is on node capz-conf-96jhk before test Jan 23 01:39:30.341: INFO: calico-node-windows-b54b2 from calico-system started at 2023-01-22 23:19:35 +0000 UTC (2 container statuses recorded) Jan 23 01:39:30.341: INFO: Container calico-node-felix ready: true, restart count 0 Jan 23 01:39:30.341: INFO: Container calico-node-startup ready: true, restart count 0 Jan 23 01:39:30.342: INFO: containerd-logger-k8bhm from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.342: INFO: Container containerd-logger ready: true, restart count 0 Jan 23 01:39:30.342: INFO: csi-azuredisk-node-win-vhcrv from kube-system started at 2023-01-23 01:21:23 +0000 UTC (3 container statuses recorded) Jan 23 01:39:30.343: INFO: Container azuredisk ready: true, restart count 0 Jan 23 01:39:30.343: INFO: Container liveness-probe ready: true, restart count 0 Jan 23 01:39:30.343: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 23 01:39:30.343: INFO: csi-proxy-llbbf from kube-system started at 2023-01-23 01:21:23 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.343: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:39:30.343: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 01:39:30.343: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Trying to 
schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.173ccc766c2ec5d0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188
Jan 23 01:39:31.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3449" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":61,"completed":52,"skipped":5947,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:39:31.596: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should verify changes to a daemon set status [Conformance] test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
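The SchedulerPredicates specs above hinge on a pod whose nodeSelector names a label that either exactly one node or no node carries; the "not matching" case is what produces the FailedScheduling event quoted above. A minimal sketch of such a pod, with a placeholder label key and value rather than the test's generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With a nodeSelector no node satisfies, the scheduler leaves the pod Pending
	// and records a FailedScheduling event like the one in the log above.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-nonexistent-label": "42", // placeholder label
			},
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.8"}},
		},
	}
	fmt.Printf("%s requires nodes labelled %v\n", pod.Name, pod.Spec.NodeSelector)
}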
Jan 23 01:39:32.086: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:32.119: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 23 01:39:32.119: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 23 01:39:33.155: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:33.188: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 23 01:39:33.188: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 23 01:39:34.155: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:34.190: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 23 01:39:34.190: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 23 01:39:35.156: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:35.189: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 23 01:39:35.189: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 23 01:39:36.156: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:36.189: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 23 01:39:36.189: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1 Jan 23 01:39:37.155: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 23 01:39:37.189: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Jan 23 01:39:37.189: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP�[0m: Getting /status Jan 23 01:39:37.254: INFO: Daemon Set daemon-set has Conditions: [] �[1mSTEP�[0m: updating the DaemonSet Status Jan 23 01:39:37.324: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} �[1mSTEP�[0m: watching for the daemon set status to be updated Jan 23 01:39:37.357: INFO: Observed &DaemonSet event: ADDED Jan 23 01:39:37.357: INFO: Observed &DaemonSet event: MODIFIED Jan 23 01:39:37.357: INFO: Observed 
&DaemonSet event: MODIFIED
Jan 23 01:39:37.358: INFO: Observed &DaemonSet event: MODIFIED
Jan 23 01:39:37.358: INFO: Found daemon set daemon-set in namespace daemonsets-8567 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 23 01:39:37.358: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status
STEP: watching for the daemon set status to be patched
Jan 23 01:39:37.429: INFO: Observed &DaemonSet event: ADDED
Jan 23 01:39:37.429: INFO: Observed &DaemonSet event: MODIFIED
Jan 23 01:39:37.429: INFO: Observed &DaemonSet event: MODIFIED
Jan 23 01:39:37.429: INFO: Observed &DaemonSet event: MODIFIED
Jan 23 01:39:37.430: INFO: Observed daemon set daemon-set in namespace daemonsets-8567 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 23 01:39:37.430: INFO: Observed &DaemonSet event: MODIFIED
Jan 23 01:39:37.430: INFO: Found daemon set daemon-set in namespace daemonsets-8567 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }]
Jan 23 01:39:37.431: INFO: Daemon set daemon-set has a patched status
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8567, will wait for the garbage collector to delete the pods
Jan 23 01:39:37.583: INFO: Deleting DaemonSet.extensions daemon-set took: 35.186707ms
Jan 23 01:39:37.683: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.286431ms
Jan 23 01:39:42.621: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 23 01:39:42.621: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Jan 23 01:39:42.654: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"36633"},"items":null}
Jan 23 01:39:42.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36634"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188
Jan 23 01:39:42.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8567" for this suite.
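The status spec above writes the DaemonSet's /status subresource directly, first with an update that appends a StatusUpdate condition and then with a patch that sets a StatusPatched condition, and watches for each change. A rough client-go equivalent of that update-then-patch sequence, with the namespace and condition values taken from the log but otherwise not the test's own code:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "daemonsets-8567", "daemon-set"

	// Update: append a custom condition and write it through the status subresource.
	ds, err := client.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Status.Conditions = append(ds.Status.Conditions, appsv1.DaemonSetCondition{
		Type: "StatusUpdate", Status: "True", Reason: "E2E", Message: "Set from e2e test",
	})
	if _, err := client.AppsV1().DaemonSets(ns).UpdateStatus(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Patch: replace the conditions via a merge patch against the same subresource.
	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	if _, err := client.AppsV1().DaemonSets(ns).Patch(context.TODO(), name,
		types.MergePatchType, payload, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}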
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":61,"completed":53,"skipped":6016,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:39:42.868: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Jan 23 01:39:43.208: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 23 01:40:43.465: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance] test/e2e/framework/framework.go:652
STEP: Create pods that use 4/5 of node resources.
Jan 23 01:40:43.583: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 23 01:40:43.621: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Jan 23 01:40:43.706: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Jan 23 01:40:43.744: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188
Jan 23 01:41:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3580" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":61,"completed":54,"skipped":6168,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:41:06.435: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance] test/e2e/framework/framework.go:652
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jan 23 01:41:17.135: INFO: The status of Pod kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj is Running (Ready = true)
Jan 23 01:41:17.486: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188
Jan 23 01:41:17.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-854" for this suite.
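The SchedulerPreemption specs above depend on PriorityClass objects and pods that reference them: the nodes are first filled with low- and medium-priority pods, and scheduling a high-priority pod with the same resource requests forces one of them to be evicted. A condensed sketch of the two pieces involved, with made-up names, priority value, and CPU request:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A cluster-scoped priority class; the higher Value wins when the scheduler must preempt.
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high-priority"},
		Value:      1000, // illustrative
	}

	// A pod that requests a large share of a node and runs at that priority, so scheduling
	// it displaces lower-priority pods holding the same resources.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("2"), // assumed request
					},
				},
			}},
		},
	}
	fmt.Printf("%s runs at priority class %s\n", pod.Name, pod.Spec.PriorityClassName)
}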
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":61,"completed":55,"skipped":6207,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:41:17.559: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 5 replicas
STEP: creating deployment test-deployment in namespace horizontal-pod-autoscaling-6374
I0123 01:41:17.873260 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-6374, replica count: 5
STEP: Running controller
I0123 01:41:27.924427 14 runners.go:193] test-deployment Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-6374
I0123 01:41:28.008556 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-6374, replica count: 1
I0123 01:41:38.060041 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 23 01:41:43.060: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Jan 23 01:41:43.093: INFO: RC test-deployment: consume 325 millicores in total
Jan 23 01:41:43.093: INFO: RC test-deployment: setting consumption to 325 millicores in total
Jan 23 01:41:43.093: INFO: RC test-deployment: sending request to consume 325 millicores
Jan 23 01:41:43.093: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:41:43.093: INFO: RC test-deployment: consume 0 MB in total Jan 23 01:41:43.093: INFO: RC test-deployment: setting consumption to 0 MB in total Jan 23 01:41:43.093: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:41:43.094: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:41:43.094: INFO: RC test-deployment: consume custom metric 0 in total Jan 23 01:41:43.094: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:41:43.094: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:41:43.162: INFO: RC test-deployment: setting bump of metric QPS to 0 in total Jan 23 01:41:43.231: INFO: waiting for 3 replicas (current: 5) Jan 23 01:42:03.265: INFO: waiting for 3 replicas (current: 5) Jan 23 01:42:13.162: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:42:13.162: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:42:13.162: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:42:13.162: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:42:13.162: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:42:13.162: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:42:23.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:42:43.198: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:42:43.198: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:42:43.199: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:42:43.198: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:42:43.207: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:42:43.207: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:42:43.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:43:03.267: INFO: waiting for 3 replicas (current: 5) Jan 23 01:43:13.234: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:43:13.234: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:43:13.234: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:43:13.234: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:43:13.248: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:43:13.248: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:43:23.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:43:43.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:43:43.270: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:43:43.270: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:43:43.270: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:43:43.270: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:43:43.290: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:43:43.290: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:44:03.267: INFO: waiting for 3 replicas (current: 5) Jan 23 01:44:13.305: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:44:13.305: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:44:13.305: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:44:13.305: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:44:13.335: INFO: RC test-deployment: sending request to consume 325 millicores Jan 
23 01:44:13.335: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:44:23.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:44:43.268: INFO: waiting for 3 replicas (current: 5) Jan 23 01:44:43.343: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:44:43.343: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:44:43.343: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:44:43.343: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:44:43.377: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:44:43.378: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:45:03.266: INFO: waiting for 3 replicas (current: 5) Jan 23 01:45:13.382: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:45:13.382: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:45:13.382: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:45:13.382: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:45:13.419: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:45:13.419: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:45:23.267: INFO: waiting for 3 replicas (current: 5) Jan 23 01:45:43.268: INFO: waiting for 3 replicas (current: 5) Jan 23 01:45:43.421: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:45:43.421: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:45:43.421: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:45:43.422: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false 
delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:45:43.461: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:45:43.461: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:46:03.265: INFO: waiting for 3 replicas (current: 5) Jan 23 01:46:13.457: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:46:13.457: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:46:13.457: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:46:13.457: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:46:13.504: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:46:13.504: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:46:23.265: INFO: waiting for 3 replicas (current: 5) Jan 23 01:46:43.269: INFO: waiting for 3 replicas (current: 5) Jan 23 01:46:43.493: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:46:43.494: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:46:43.493: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:46:43.494: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:46:43.544: INFO: RC test-deployment: sending request to consume 325 millicores Jan 23 01:46:43.544: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Jan 23 01:47:03.266: INFO: waiting for 3 replicas (current: 3) Jan 23 01:47:03.266: INFO: RC test-deployment: consume 10 millicores in total Jan 23 01:47:03.266: INFO: RC test-deployment: setting consumption to 10 millicores in total Jan 23 01:47:03.299: INFO: waiting for 1 replicas (current: 3) Jan 23 01:47:13.531: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:47:13.531: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:47:13.531: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false 
false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:47:13.531: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:47:13.586: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:47:13.586: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:47:23.334: INFO: waiting for 1 replicas (current: 3) Jan 23 01:47:43.337: INFO: waiting for 1 replicas (current: 3) Jan 23 01:47:43.567: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:47:43.567: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:47:43.567: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:47:43.567: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:47:43.626: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:47:43.626: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:48:03.338: INFO: waiting for 1 replicas (current: 3) Jan 23 01:48:13.605: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:48:13.605: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:48:13.605: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:48:13.605: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:48:13.666: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:48:13.667: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:48:23.336: INFO: waiting for 1 replicas (current: 3) Jan 23 01:48:43.341: INFO: waiting for 1 replicas (current: 3) Jan 23 01:48:43.641: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:48:43.642: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:48:43.641: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:48:43.642: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:48:43.707: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:48:43.707: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:49:03.338: INFO: waiting for 1 replicas (current: 3) Jan 23 01:49:13.676: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:49:13.676: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:49:13.676: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:49:13.677: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:49:13.748: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:49:13.748: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:49:23.336: INFO: waiting for 1 replicas (current: 3) Jan 23 01:49:43.336: INFO: waiting for 1 replicas (current: 3) Jan 23 01:49:43.712: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:49:43.712: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:49:43.712: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:49:43.712: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:49:43.787: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:49:43.787: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:50:03.333: INFO: waiting for 1 replicas (current: 3) Jan 23 01:50:13.750: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:50:13.750: INFO: ConsumeCustomMetric URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:50:13.750: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:50:13.751: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:50:13.827: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:50:13.827: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:50:23.339: INFO: waiting for 1 replicas (current: 3) Jan 23 01:50:43.334: INFO: waiting for 1 replicas (current: 3) Jan 23 01:50:43.787: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:50:43.787: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:50:43.787: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:50:43.787: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:50:43.869: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:50:43.869: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:51:03.333: INFO: waiting for 1 replicas (current: 3) Jan 23 01:51:13.824: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:51:13.824: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:51:13.824: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:51:13.824: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:51:13.908: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:51:13.908: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:51:23.335: INFO: waiting for 1 replicas (current: 3) Jan 23 01:51:43.334: INFO: waiting for 1 replicas (current: 3) Jan 23 01:51:43.860: INFO: RC test-deployment: 
sending request to consume 0 of custom metric QPS Jan 23 01:51:43.860: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:51:43.865: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:51:43.865: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:51:43.949: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:51:43.949: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:52:03.335: INFO: waiting for 1 replicas (current: 2) Jan 23 01:52:13.897: INFO: RC test-deployment: sending request to consume 0 of custom metric QPS Jan 23 01:52:13.897: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:52:13.900: INFO: RC test-deployment: sending request to consume 0 MB Jan 23 01:52:13.900: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:52:13.990: INFO: RC test-deployment: sending request to consume 10 millicores Jan 23 01:52:13.990: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6374/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Jan 23 01:52:23.339: INFO: waiting for 1 replicas (current: 1) �[1mSTEP�[0m: Removing consuming RC test-deployment Jan 23 01:52:23.376: INFO: RC test-deployment: stopping metric consumer Jan 23 01:52:23.376: INFO: RC test-deployment: stopping CPU consumer Jan 23 01:52:23.376: INFO: RC test-deployment: stopping mem consumer �[1mSTEP�[0m: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-6374, will wait for the garbage collector to delete the pods Jan 23 01:52:33.500: INFO: Deleting Deployment.apps test-deployment took: 36.571264ms Jan 23 01:52:33.601: INFO: Terminating Deployment.apps test-deployment pods took: 100.808227ms �[1mSTEP�[0m: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-6374, will wait for the garbage collector to delete the pods Jan 23 01:52:36.085: INFO: Deleting ReplicationController test-deployment-ctrl took: 37.179341ms Jan 23 01:52:36.185: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.327885ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 01:52:37.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-6374" 
for this suite.
• [SLOW TEST:680.462 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
[Serial] [Slow] Deployment test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
Should scale from 5 pods to 3 pods and from 3 to 1 test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":56,"skipped":6286,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSS... (skipped spec markers)
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
attempt to deploy past allocatable memory limits
should fail deployments of pods once there isn't enough memory
test/e2e/windows/memory_limits.go:60
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:52:38.031: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48
[It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60
Jan 23 01:52:38.440: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:188
Jan 23 01:52:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-4139" for this suite.
•{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory","total":61,"completed":57,"skipped":6682,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSS... (skipped spec markers)
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
[Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
Should not scale up on a busy sidecar with an idle application
test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:52:38.515: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should not scale up on a busy sidecar with an idle application test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-3236
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-3236
I0123 01:52:38.827141 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-3236, replica count: 1
I0123 01:52:48.880311 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 01:52:58.881511 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123
01:53:08.882738 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:53:18.884442 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:53:28.887442 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:53:38.888432 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:53:48.888810 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:53:58.891564 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:54:08.892766 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:54:18.893491 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:54:28.896438 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:54:38.896985 14 runners.go:193] rs Pods: 1 out of 1 created, 0 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 01:54:48.897709 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: Running controller �[1mSTEP�[0m: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-3236 I0123 01:54:48.982640 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-3236, replica count: 1 I0123 01:54:59.034429 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 01:55:04.036: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 �[1mSTEP�[0m: Running consuming RC sidecar rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas �[1mSTEP�[0m: Running controller for sidecar �[1mSTEP�[0m: creating replication controller rs-sidecar-ctrl in namespace horizontal-pod-autoscaling-3236 I0123 01:55:04.227092 14 runners.go:193] Created replication controller with name: rs-sidecar-ctrl, namespace: horizontal-pod-autoscaling-3236, replica count: 1 I0123 01:55:14.278296 14 runners.go:193] rs-sidecar-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 01:55:19.281: INFO: Waiting for amount of service:rs-sidecar-ctrl endpoints to be 1 Jan 23 01:55:19.314: INFO: RC rs: consume 250 millicores in total Jan 23 01:55:19.314: INFO: RC rs: setting consumption to 250 millicores in total Jan 23 01:55:19.314: INFO: RC rs: sending request to consume 250 millicores Jan 23 01:55:19.314: INFO: RC rs: consume 0 MB in total Jan 23 01:55:19.314: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 
01:55:19.314: INFO: RC rs: sending request to consume 0 MB Jan 23 01:55:19.314: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:55:19.380: INFO: RC rs: setting consumption to 0 MB in total Jan 23 01:55:19.380: INFO: RC rs: consume custom metric 0 in total Jan 23 01:55:19.380: INFO: RC rs: setting bump of metric QPS to 0 in total Jan 23 01:55:19.380: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 23 01:55:19.380: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:55:19.449: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:55:19.482: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:55:29.518: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:55:29.551: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:55:39.515: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:55:39.547: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:55:49.380: INFO: RC rs: sending request to consume 0 MB Jan 23 01:55:49.380: INFO: RC rs: sending request to consume 250 millicores Jan 23 01:55:49.380: INFO: ConsumeMem URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:55:49.380: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 01:55:49.416: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 23 01:55:49.416: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:55:49.516: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:55:49.549: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:55:59.517: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:55:59.550: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:56:09.516: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:56:09.549: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:56:19.416: INFO: RC rs: sending request to consume 0 MB Jan 23 01:56:19.416: INFO: ConsumeMem URL: {https 
capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 23 01:56:19.423: INFO: RC rs: sending request to consume 250 millicores Jan 23 01:56:19.423: INFO: ConsumeCPU URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 23 01:56:19.452: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 23 01:56:19.452: INFO: ConsumeCustomMetric URL: {https capz-conf-zs64h3-3062464f.canadacentral.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 23 01:56:19.515: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:56:19.548: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:56:19.588: INFO: expecting there to be in [1, 1] replicas (are: 1) Jan 23 01:56:19.621: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:<nil>} Jan 23 01:56:19.621: INFO: Number of replicas was stable over 1m0s �[1mSTEP�[0m: Removing consuming RC rs Jan 23 01:56:19.660: INFO: RC rs: stopping metric consumer Jan 23 01:56:19.660: INFO: RC rs: stopping mem consumer Jan 23 01:56:19.660: INFO: RC rs: stopping CPU consumer �[1mSTEP�[0m: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-3236, will wait for the garbage collector to delete the pods Jan 23 01:56:29.839: INFO: Deleting ReplicaSet.apps rs took: 41.970897ms Jan 23 01:56:29.940: INFO: Terminating ReplicaSet.apps rs pods took: 101.147417ms �[1mSTEP�[0m: deleting ReplicationController rs-sidecar-ctrl in namespace horizontal-pod-autoscaling-3236, will wait for the garbage collector to delete the pods Jan 23 01:56:31.713: INFO: Deleting ReplicationController rs-sidecar-ctrl took: 36.347022ms Jan 23 01:56:31.814: INFO: Terminating ReplicationController rs-sidecar-ctrl pods took: 101.254531ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 23 01:56:33.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "horizontal-pod-autoscaling-3236" for this suite. 
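The ConsumeCPU, ConsumeMem and BumpMetric lines above are the e2e resource consumer being driven through the API server's service proxy. As a rough hand-run equivalent (a sketch only, assuming the workload cluster and the horizontal-pod-autoscaling-3236 namespace still exist and kubectl points at the same /tmp/kubeconfig; the test framework may use a different HTTP verb for these calls), one such request could be replayed like this, with the path and query parameters copied from the log lines above:
  # Replay one resource-consumer request via the API server's service proxy (illustrative sketch).
  kubectl --kubeconfig /tmp/kubeconfig get --raw \
    "/api/v1/namespaces/horizontal-pod-autoscaling-3236/services/rs-sidecar-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=250&requestSizeMillicores=100"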
• [SLOW TEST:234.972 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
[Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
Should not scale up on a busy sidecar with an idle application test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application","total":61,"completed":58,"skipped":6826,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSS... (skipped spec markers)
Jan 23 01:56:33.489: INFO: Running AfterSuite actions on all nodes
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Jan 23 01:56:33.489: INFO: Running AfterSuite actions on node 1
Jan 23 01:56:33.489: INFO: Skipping dumping logs from cluster
JUnit report was created: /output/junit_kubetest.01.xml
{"msg":"Test Suite completed","total":61,"completed":58,"skipped":6914,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
Summarizing 1 Failure:
[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
test/e2e/scheduling/predicates.go:883
Ran 59 of 6973 Specs in 9269.842 seconds
FAIL! -- 58 Passed | 1 Failed | 0 Pending | 6914 Skipped
--- FAIL: TestE2E (9272.52s) FAIL Ginkgo ran 1 suite in 2h34m32.687547334s Test Suite Failed [FAILED] Unexpected error: <*errors.withStack | 0xc0009ac150>: { error: <*errors.withMessage | 0xc000956820>{ cause: <*errors.errorString | 0xc0004f4b60>{ s: "error container run failed with exit code 1", }, msg: "Unable to run conformance tests", }, stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741], } Unable to run conformance tests: error container run failed with exit code 1 occurred In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91 < Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/23/23 01:56:33.91 (2h44m19.457s) > Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/23/23 01:56:33.91 Jan 23 01:56:33.912: INFO: FAILED! Jan 23 01:56:33.915: INFO: Cleaning up after "Conformance Tests conformance-tests" spec STEP: Dumping logs from the "capz-conf-zs64h3" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/23/23 01:56:33.915 Jan 23 01:56:33.915: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 logs Jan 23 01:56:34.027: INFO: Collecting logs for Linux node capz-conf-zs64h3-control-plane-dlccj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3 Jan 23 01:57:12.118: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-control-plane-dlccj Jan 23 01:57:13.070: INFO: Collecting logs for Windows node capz-conf-96jhk in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3 Jan 23 01:59:13.743: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-96jhk to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q88x8/crashdumps.tar Jan 23 01:59:15.369: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-96jhk Jan 23 01:59:16.404: INFO: Collecting logs for Windows node capz-conf-2xrmj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3 Jan 23 02:01:16.071: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-2xrmj to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q945m/crashdumps.tar Jan 23 02:01:17.701: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-2xrmj Jan 23 02:01:18.900: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 kube-system pod logs Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-4445b, container calico-apiserver Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-gzr5r, container calico-apiserver Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-4445b Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-gzr5r Jan 23 02:01:19.293: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-r76g2 Jan 23 02:01:19.294: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-r76g2, container calico-kube-controllers Jan 23 02:01:19.294: INFO: Creating log watcher for controller calico-system/calico-node-windows-v55p5, container calico-node-startup Jan 23 02:01:19.294: INFO: Creating log watcher for controller calico-system/calico-node-windows-b54b2, container 
calico-node-startup Jan 23 02:01:19.294: INFO: Creating log watcher for controller calico-system/calico-node-4wzwq, container calico-node Jan 23 02:01:19.294: INFO: Creating log watcher for controller calico-system/calico-node-windows-v55p5, container calico-node-felix Jan 23 02:01:19.295: INFO: Collecting events for Pod calico-system/calico-typha-646b464966-bv45m Jan 23 02:01:19.295: INFO: Creating log watcher for controller calico-system/csi-node-driver-zxc7j, container csi-node-driver-registrar Jan 23 02:01:19.295: INFO: Collecting events for Pod calico-system/calico-node-windows-v55p5 Jan 23 02:01:19.295: INFO: Collecting events for Pod calico-system/calico-node-4wzwq Jan 23 02:01:19.295: INFO: Creating log watcher for controller calico-system/calico-typha-646b464966-bv45m, container calico-typha Jan 23 02:01:19.295: INFO: Collecting events for Pod calico-system/csi-node-driver-zxc7j Jan 23 02:01:19.295: INFO: Creating log watcher for controller calico-system/calico-node-windows-b54b2, container calico-node-felix Jan 23 02:01:19.295: INFO: Collecting events for Pod calico-system/calico-node-windows-b54b2 Jan 23 02:01:19.295: INFO: Creating log watcher for controller calico-system/csi-node-driver-zxc7j, container calico-csi Jan 23 02:01:19.381: INFO: Creating log watcher for controller kube-system/containerd-logger-h7zw5, container containerd-logger Jan 23 02:01:19.381: INFO: Collecting events for Pod kube-system/containerd-logger-h7zw5 Jan 23 02:01:19.381: INFO: Creating log watcher for controller kube-system/containerd-logger-k8bhm, container containerd-logger Jan 23 02:01:19.381: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vhcrv, container node-driver-registrar Jan 23 02:01:19.381: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container azuredisk Jan 23 02:01:19.381: INFO: Collecting events for Pod kube-system/containerd-logger-k8bhm Jan 23 02:01:19.382: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj, container kube-controller-manager Jan 23 02:01:19.382: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-zs64h3-control-plane-dlccj Jan 23 02:01:19.382: INFO: Creating log watcher for controller kube-system/kube-proxy-76knr, container kube-proxy Jan 23 02:01:19.382: INFO: Collecting events for Pod kube-system/kube-proxy-windows-mrr95 Jan 23 02:01:19.382: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sbg2s, container node-driver-registrar Jan 23 02:01:19.382: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-v86xv Jan 23 02:01:19.382: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sbg2s, container liveness-probe Jan 23 02:01:19.383: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vhcrv, container azuredisk Jan 23 02:01:19.383: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-zs64h3-control-plane-dlccj, container kube-scheduler Jan 23 02:01:19.384: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sbg2s, container azuredisk Jan 23 02:01:19.384: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-win-vhcrv Jan 23 02:01:19.384: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-conf-zs64h3-control-plane-dlccj Jan 23 02:01:19.384: INFO: Creating log watcher for controller kube-system/csi-proxy-khj48, container csi-proxy Jan 23 
02:01:19.384: INFO: Creating log watcher for controller kube-system/metrics-server-7d674f87b8-4bpgn, container metrics-server Jan 23 02:01:19.384: INFO: Collecting events for Pod kube-system/metrics-server-7d674f87b8-4bpgn Jan 23 02:01:19.384: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-54x8v, container node-driver-registrar Jan 23 02:01:19.385: INFO: Collecting events for Pod kube-system/kube-proxy-76knr Jan 23 02:01:19.385: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-bms6h, container kube-proxy Jan 23 02:01:19.385: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-sbg2s Jan 23 02:01:19.385: INFO: Collecting events for Pod kube-system/kube-proxy-windows-bms6h Jan 23 02:01:19.385: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-zs64h3-control-plane-dlccj, container etcd Jan 23 02:01:19.386: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-54x8v, container azuredisk Jan 23 02:01:19.386: INFO: Collecting events for Pod kube-system/etcd-capz-conf-zs64h3-control-plane-dlccj Jan 23 02:01:19.386: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-54x8v, container liveness-probe Jan 23 02:01:19.386: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-conf-zs64h3-control-plane-dlccj Jan 23 02:01:19.386: INFO: Creating log watcher for controller kube-system/kube-proxy-windows-mrr95, container kube-proxy Jan 23 02:01:19.386: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-zs64h3-control-plane-dlccj, container kube-apiserver Jan 23 02:01:19.387: INFO: Collecting events for Pod kube-system/csi-proxy-khj48 Jan 23 02:01:19.387: INFO: Creating log watcher for controller kube-system/csi-proxy-llbbf, container csi-proxy Jan 23 02:01:19.387: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-win-54x8v Jan 23 02:01:19.387: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vhcrv, container liveness-probe Jan 23 02:01:19.388: INFO: Collecting events for Pod kube-system/csi-proxy-llbbf Jan 23 02:01:19.388: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container csi-provisioner Jan 23 02:01:19.388: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-vfrs6 Jan 23 02:01:19.388: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container csi-attacher Jan 23 02:01:19.388: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-vfrs6, container coredns Jan 23 02:01:19.388: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-xkzsq, container coredns Jan 23 02:01:19.389: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-xkzsq Jan 23 02:01:19.389: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container csi-resizer Jan 23 02:01:19.389: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container csi-snapshotter Jan 23 02:01:19.389: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-v86xv, container liveness-probe Jan 23 02:01:19.455: INFO: Fetching kube-system pod logs took 555.014741ms Jan 23 02:01:19.455: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 Azure activity log Jan 23 02:01:19.455: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-kmtvm, 
container tigera-operator Jan 23 02:01:19.456: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-kmtvm Jan 23 02:01:20.800: INFO: Fetching activity logs took 1.34477258s Jan 23 02:01:20.800: INFO: Dumping all the Cluster API resources in the "capz-conf-zs64h3" namespace Jan 23 02:01:21.238: INFO: Deleting all clusters in the capz-conf-zs64h3 namespace STEP: Deleting cluster capz-conf-zs64h3 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/23/23 02:01:21.267 INFO: Waiting for the Cluster capz-conf-zs64h3/capz-conf-zs64h3 to be deleted STEP: Waiting for cluster capz-conf-zs64h3 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/23/23 02:01:21.282 Jan 23 02:08:11.578: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-zs64h3 Jan 23 02:08:11.598: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/23/23 02:08:12.295 < Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/23/23 02:08:47.988 (12m14.078s)
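The only spec that failed in this run is the SchedulerPredicates hostPort/0.0.0.0 conflict test reported at test/e2e/scheduling/predicates.go:883. Since the job drives the upstream e2e.test binary directly (the full kubetest invocation is logged further below), a focused re-run of just that spec would look roughly like the sketch here; this is an illustration only, with the binary path, kubeconfig path and flags taken from the logged invocation, and it assumes the workload cluster is still reachable:
  # Focused re-run of the single failed spec (sketch; flags copied from the logged kubetest command).
  /usr/local/bin/e2e.test \
    --kubeconfig=/tmp/kubeconfig \
    --provider=skeleton \
    -node-os-distro=windows \
    -ginkgo.v=true \
    -ginkgo.focus='\[sig-scheduling\] SchedulerPredicates \[Serial\].*hostPort and protocol but one using 0\.0\.0\.0 hostIP'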
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
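The entries above are the individual capz-e2e specs defined for this job; in this run only the Conformance Tests spec executed and the rest were skipped. A single spec from the list can be selected with a Ginkgo focus regular expression; the sketch below assumes the repository's GINKGO_FOCUS variable and test-e2e make target, which are assumptions about the local Makefile wiring rather than something shown in this log:
  # Run one of the listed capz-e2e specs in isolation (sketch; GINKGO_FOCUS and the
  # test-e2e target are assumed Makefile wiring).
  GINKGO_FOCUS='Workload cluster creation Creating a highly available cluster \[REQUIRED\]' \
    make test-e2e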
... skipping 483 lines ... [38;5;243m------------------------------[0m [0mConformance Tests [0m[1mconformance-tests[0m [38;5;243m/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100[0m INFO: Cluster name is capz-conf-zs64h3 [1mSTEP:[0m Creating namespace "capz-conf-zs64h3" for hosting the cluster [38;5;243m@ 01/22/23 23:12:14.386[0m Jan 22 23:12:14.386: INFO: starting to create namespace for hosting the "capz-conf-zs64h3" test spec 2023/01/22 23:12:14 failed trying to get namespace (capz-conf-zs64h3):namespaces "capz-conf-zs64h3" not found INFO: Creating namespace capz-conf-zs64h3 INFO: Creating event watcher for namespace "capz-conf-zs64h3" [1mconformance-tests[38;5;243m - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/22/23 23:12:14.453[0m [1mconformance-tests [0m[1mName[0m | [1mN[0m | [1mMin[0m | [1mMedian[0m | [1mMean[0m | [1mStdDev[0m | [1mMax[0m INFO: Creating the workload cluster with name "capz-conf-zs64h3" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.24.11-rc.0.6+7c685ed7305e76, 1 control-plane machines, 0 worker machines) ... skipping 112 lines ... [1mSTEP:[0m Waiting for the workload nodes to exist [38;5;243m@ 01/22/23 23:19:03.205[0m [1mSTEP:[0m Checking all the machines controlled by capz-conf-zs64h3-md-win are in the "<None>" failure domain [38;5;243m@ 01/22/23 23:21:53.503[0m INFO: Waiting for the machine pools to be provisioned INFO: Using repo-list '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/kubetest/repo-list.yaml' for version 'v1.24.11-rc.0.6+7c685ed7305e76' [1mSTEP:[0m Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "-ginkgo.progress=true" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.slowSpecThreshold=120" "-node-os-distro=windows" "-dump-logs-on-failure=true" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.trace=true" "-ginkgo.v=true" "-prepull-images=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=0"] [38;5;243m@ 01/22/23 23:21:53.811[0m I0122 23:22:01.018730 14 e2e.go:129] Starting e2e run "e9b272d5-52c6-4cae-a53c-abd7836f7454" on Ginkgo node 1 {"msg":"Test Suite starting","total":61,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: [1m1674429720[0m - Will randomize all specs Will run [1m61[0m of [1m6973[0m specs Jan 22 23:22:03.653: INFO: >>> kubeConfig: /tmp/kubeconfig ... skipping 72 lines ... [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188 Jan 22 23:24:18.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sched-preemption-1329" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 [32m•[0m{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":61,"completed":1,"skipped":59,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 35 lines ... For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:24:20.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-5889" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":61,"completed":2,"skipped":270,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)[0m [90mReplicationController light[0m [1mShould scale from 2 pods to 1 pod [Slow][0m [37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:82[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ... skipping 123 lines ... 
[90mtest/e2e/autoscaling/framework.go:23[0m ReplicationController light [90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:69[0m Should scale from 2 pods to 1 pod [Slow] [90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:82[0m [90m------------------------------[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","total":61,"completed":3,"skipped":305,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould delete RS created by deployment when not orphaning [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 35 lines ... For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:30:24.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-279" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":61,"completed":4,"skipped":332,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] Daemon set [Serial][0m [1mshould run and stop complex daemon [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-apps] Daemon set [Serial] ... skipping 63 lines ... Jan 22 23:30:46.174: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4175"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:30:46.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "daemonsets-3375" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":61,"completed":5,"skipped":379,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould support cascading deletion of custom resources[0m [37mtest/e2e/apimachinery/garbage_collector.go:905[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 10 lines ... 
Jan 22 23:30:48.913: INFO: created dependent resource "dependenttnhdb" Jan 22 23:30:48.987: INFO: created canary resource "canaryhr48x" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 22 23:31:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-203" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":61,"completed":6,"skipped":467,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)[0m [90m[Serial] [Slow] ReplicaSet[0m [1mShould scale from 5 pods to 3 pods and from 3 to 1[0m [37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:53[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ... skipping 209 lines ... [90mtest/e2e/autoscaling/framework.go:23[0m [Serial] [Slow] ReplicaSet [90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:48[0m Should scale from 5 pods to 3 pods and from 3 to 1 [90mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:53[0m [90m------------------------------[0m {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":7,"skipped":507,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[
36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Namespaces [Serial][0m [1mshould ensure that all pods are removed when a namespace is deleted [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] ... skipping 17 lines ... test/e2e/framework/framework.go:188 Jan 22 23:42:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "namespaces-7376" for this suite. [1mSTEP[0m: Destroying namespace "nsdeletetest-8918" for this suite. Jan 22 23:42:39.379: INFO: Namespace nsdeletetest-8918 was already deleted [1mSTEP[0m: Destroying namespace "nsdeletetest-6879" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":61,"completed":8,"skipped":737,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow][0m [90mGMSA support[0m [1mworks end to end[0m [37mtest/e2e/windows/gmsa_full.go:97[0m [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 51 lines ... test/e2e/framework/framework.go:188 Jan 22 23:42:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "namespaces-6038" for this suite. [1mSTEP[0m: Destroying namespace "nsdeletetest-9123" for this suite. Jan 22 23:42:46.545: INFO: Namespace nsdeletetest-9123 was already deleted [1mSTEP[0m: Destroying namespace "nsdeletetest-9390" for this suite. 
• {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":61,"completed":9,"skipped":912,"failed":0}
------------------------------
[sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods
  should return within 10 seconds
  test/e2e/windows/kubelet_stats.go:47
[BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
... skipping 201 lines ...
STEP: Getting kubelet stats 5 times and checking average duration
Jan 22 23:43:53.815: INFO: Getting kubelet stats for node capz-conf-2xrmj took an average of 332 milliseconds over 5 iterations
[AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:43:53.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-stats-test-windows-serial-3795" for this suite.
• {"msg":"PASSED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","total":61,"completed":10,"skipped":1139,"failed":0}
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 29 lines ...
Jan 22 23:45:06.998: INFO: Terminating ReplicationController wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9 pods took: 101.194022ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:188
Jan 22 23:45:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8896" for this suite.
• {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":61,"completed":11,"skipped":1149,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window
  should scale down soon after the stabilization period
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
... skipping 98 lines ...
test/e2e/autoscaling/framework.go:23
  with short downscale stabilization window
  test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:33
    should scale down soon after the stabilization period
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period","total":61,"completed":12,"skipped":1224,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:48:53.107: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 22 23:48:53.561: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:48:53.594: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 22 23:48:53.594: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
... skipping 9 lines ...
Jan 22 23:48:57.633: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:48:57.669: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jan 22 23:48:57.669: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
Jan 22 23:48:58.630: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:48:58.664: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 22 23:48:58.665: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 22 23:48:58.813: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:48:58.846: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:48:58.846: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
Jan 22 23:48:59.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:48:59.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:48:59.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
... skipping 6 lines ...
Jan 22 23:49:02.885: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:49:02.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Jan 22 23:49:02.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
Jan 22 23:49:03.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 22 23:49:03.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jan 22 23:49:03.918: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1609, will wait for the garbage collector to delete the pods
Jan 22 23:49:04.103: INFO: Deleting DaemonSet.extensions daemon-set took: 36.334354ms
Jan 22 23:49:04.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.652131ms
... skipping 4 lines ...
Jan 22 23:49:09.303: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9131"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:49:09.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1609" for this suite.
• {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":61,"completed":13,"skipped":1240,"failed":0}
------------------------------
[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods
  latency/resource should be within limit when create 10 pods with 0s interval
  test/e2e/windows/density.go:68
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
... skipping 50 lines ...
Jan 22 23:49:49.868: INFO: Pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 no longer exists
Jan 22 23:49:49.870: INFO: Pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 no longer exists
[AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 22 23:49:49.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "density-test-windows-4549" for this suite.
• {"msg":"PASSED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","total":61,"completed":14,"skipped":1314,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 51 lines ...
Jan 22 23:50:15.666: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9712"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:50:15.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5489" for this suite.
• {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":61,"completed":15,"skipped":1328,"failed":0}
------------------------------
[sig-node] Variable Expansion
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Jan 22 23:50:15.845: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 22 23:52:16.806: INFO: Successfully updated pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 22 23:52:28.874: INFO: Deleting pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" in namespace "var-expansion-7880"
Jan 22 23:52:28.916: INFO: Wait up to 5m0s for pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" to be fully deleted
... skipping 5 lines ...
• [SLOW TEST:139.215 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":61,"completed":16,"skipped":1618,"failed":0}
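The [sig-node] Variable Expansion specs above exercise kubelet's $(VAR) expansion in a volume mount's subPathExpr: while the expression cannot be resolved the pod stays pending, and updating the pod to a resolvable expression lets it run. A rough sketch of the mechanism using the corev1 types; the names, the image, and the POD_NAME downward-API variable are illustrative assumptions, not the exact pod from this run:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: "var-expansion"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2", // image borrowed from this log for illustration
				// POD_NAME is populated from the downward API and referenced below.
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/logs",
					// kubelet expands $(POD_NAME) before mounting; an unresolvable
					// variable here leaves the pod stuck until the spec is updated.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}

	out, _ := yaml.Marshal(pod) // render the pod as YAML for inspection
	fmt.Println(string(out))
}
```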
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 86 lines ...
Jan 22 23:53:02.292: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10383"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:53:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1413" for this suite.
• {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":61,"completed":17,"skipped":1898,"failed":0}
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 22 23:53:02.467: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:652
Jan 22 23:53:06.805: INFO: Deleting pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" in namespace "var-expansion-355"
Jan 22 23:53:06.842: INFO: Wait up to 5m0s for pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 22 23:53:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-355" for this suite.
• {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":61,"completed":18,"skipped":2023,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment
  Should scale from 1 pod to 3 pods and from 3 to 5
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 73 lines ...
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
    Should scale from 1 pod to 3 pods and from 3 to 5
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":19,"skipped":2117,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 135 lines ...
Jan 22 23:56:17.051: INFO: Deleting pod "simpletest.rc-xh4hj" in namespace "gc-7044"
Jan 22 23:56:17.100: INFO: Deleting pod "simpletest.rc-z9dw6" in namespace "gc-7044"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 22 23:56:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7044" for this suite.
• {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":61,"completed":20,"skipped":2218,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 293 lines ...
Jan 22 23:57:51.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13403"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 22 23:57:51.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1851" for this suite.
• {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":61,"completed":21,"skipped":2301,"failed":0}
------------------------------
[sig-node] Variable Expansion
  should succeed in writing subpaths in container [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-node] Variable Expansion
... skipping 24 lines ...
Jan 22 23:58:07.403: INFO: Deleting pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" in namespace "var-expansion-4649"
Jan 22 23:58:07.443: INFO: Wait up to 5m0s for pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:188
Jan 22 23:58:13.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4649" for this suite.
• {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":61,"completed":22,"skipped":2405,"failed":0}
------------------------------
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs
  passes the credential specs down to the Pod's containers
  test/e2e/windows/gmsa_kubelet.go:45
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
... skipping 21 lines ...
Jan 22 23:58:23.410: INFO: stderr: ""
Jan 22 23:58:23.410: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/framework.go:188
Jan 22 23:58:23.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-kubelet-test-windows-3956" for this suite.
[32m•[0m{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","total":61,"completed":23,"skipped":2632,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] Daemon set [Serial][0m [1mshould list and delete a collection of DaemonSets [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-apps] Daemon set [Serial] ... skipping 37 lines ... Jan 22 23:58:29.271: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13635"},"items":[{"metadata":{"name":"daemon-set-csl9w","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"5f65b1be-96d0-4f14-a052-c221bc1a42f0","resourceVersion":"13635","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"f808545888da8fa34ef50c7342af3a9a31bce07aa7f2b4d958a2faca6b326473","cni.projectcalico.org/podIP":"192.168.14.42/32","cni.projectcalico.org/podIPs":"192.168.14.42/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersRead
y\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bp6l8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bp6l8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-2xrmj","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-2xrmj"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.42","podIPs":[{"ip":"192.168.14.42"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://e2ca4f87e85b1f8de4ebabe8b53846b496bab83db74eee1e9b34bdb3d9ca60d4","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lpchl","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"0c8eff20-b12b-47e5-8a27-adc41c9c9751","resourceVersion":"13634","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","dele
tionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"d349259a8543860d0148506b5972fe6c52c93c43645616144e5c179f45d6e5c4","cni.projectcalico.org/podIP":"192.168.198.34/32","cni.projectcalico.org/podIPs":"192.168.198.34/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.198.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-xm2h5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-xm2h5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-96jhk","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms"
:[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-96jhk"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"}],"hostIP":"10.1.0.4","podIP":"192.168.198.34","podIPs":[{"ip":"192.168.198.34"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://11f00f8d177ab9ad982484e50b8cd6d456d7f35aeddbb98006233e4be238a22b","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 22 23:58:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "daemonsets-9412" for this suite. 
• {"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":61,"completed":24,"skipped":2714,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 81 lines ...
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
    Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","total":61,"completed":25,"skipped":2871,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 10 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:188
Jan 23 00:01:10.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6010" for this suite.
STEP: Destroying namespace "nspatchtest-2837c980-446e-4fce-9b28-09f45d9af33c-8325" for this suite.
• {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":61,"completed":26,"skipped":2943,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 80 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 23 00:01:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1217" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
• {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":61,"completed":27,"skipped":3098,"failed":0}
------------------------------
[sig-apps] CronJob
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
• [SLOW TEST:300.479 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":61,"completed":28,"skipped":3127,"failed":0}
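The CronJob spec above takes roughly five minutes (SLOW TEST:300.479 seconds) because it creates a suspended CronJob and then waits to confirm the controller never schedules a Job for it. A minimal sketch of such a suspended CronJob using the k8s.io/api/batch/v1 types; the name, schedule, and image are illustrative assumptions, not values taken from this run:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	suspend := true

	cj := batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob", Namespace: "cronjob-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // would otherwise fire every minute
			Suspend:  &suspend,      // while Suspend is true, the controller creates no Jobs
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2", // image borrowed from this log for illustration
								Command: []string{"sh", "-c", "date"},
							}},
						},
					},
				},
			},
		},
	}

	out, _ := yaml.Marshal(cj) // render the object as YAML for inspection
	fmt.Println(string(out))
}
```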
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] StatefulSet
... skipping 104 lines ...
Jan 23 00:07:56.163: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 00:07:56.196: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:188
Jan 23 00:07:56.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8983" for this suite.
• {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":61,"completed":29,"skipped":3577,"failed":0}
------------------------------
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support
  can read and write file to remote SMB folder
  test/e2e/windows/gmsa_full.go:167
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
... skipping 87 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 00:08:09.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9386" for this suite.
• {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":61,"completed":30,"skipped":3666,"failed":0}
------------------------------
[sig-node] Pods
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:723
[BeforeEach] [sig-node] Pods
... skipping 28 lines ...
• [SLOW TEST:1631.415 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  test/e2e/common/node/pods.go:723
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":61,"completed":31,"skipped":3717,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  [Serial] [Slow] ReplicationController
  Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 326 lines ...
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicationController
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:59
    Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","total":61,"completed":32,"skipped":3759,"failed":0}
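The two ReplicationController autoscaling specs in this run (1→3→5 above, 5→3→1 below) drive scaling from CPU utilization alone. As a sketch only — the target name and the utilization threshold are assumptions, since the log elides the values the suite actually used — the autoscaler they exercise is shaped like this:

# Illustrative HPA of the same shape as the one these specs create.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rc-cpu-example             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: rc                       # hypothetical workload name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20     # assumed threshold, not taken from this log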
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  [Serial] [Slow] ReplicationController
  Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 454 lines ...
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicationController
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:59
    Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","total":61,"completed":33,"skipped":4094,"failed":0}
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] StatefulSet
... skipping 121 lines ...
Jan 23 01:10:57.933: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 01:10:57.965: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:188
Jan 23 01:10:58.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8264" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":61,"completed":34,"skipped":4098,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 9 lines ...
Jan 23 01:10:58.569: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"acf187d3-d32e-4c0b-92fe-704733e1c609", Controller:(*bool)(0xc002703fd6), BlockOwnerDeletion:(*bool)(0xc002703fd7)}}
Jan 23 01:10:58.607: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2456750a-f30f-4480-ae2c-4452b083b784", Controller:(*bool)(0xc0005b37f6), BlockOwnerDeletion:(*bool)(0xc0005b37f7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:11:03.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1342" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":61,"completed":35,"skipped":4105,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 29 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:12:05.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6551" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":61,"completed":36,"skipped":4238,"failed":0}
------------------------------
[sig-apps] CronJob
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
• [SLOW TEST:356.559 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":61,"completed":37,"skipped":4266,"failed":0}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  [Serial] [Slow] ReplicaSet
  Should scale from 1 pod to 3 pods and from 3 to 5
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 53 lines ...
Jan 23 01:19:40.479: INFO: Deleting ReplicationController rs-ctrl took: 35.547538ms
Jan 23 01:19:40.579: INFO: Terminating ReplicationController rs-ctrl pods took: 100.516538ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/framework.go:188
Jan 23 01:19:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "horizontal-pod-autoscaling-5586" for this suite.
[32m•[0m{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":38,"skipped":4484,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Variable Expansion[0m [1mshould fail substituting values in a volume subpath with backticks [Slow] [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 23 01:19:42.415: INFO: >>> kubeConfig: /tmp/kubeconfig [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/framework/framework.go:652 Jan 23 01:19:46.748: INFO: Deleting pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" in namespace "var-expansion-7276" Jan 23 01:19:46.786: INFO: Wait up to 5m0s for pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188 Jan 23 01:19:48.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "var-expansion-7276" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":61,"completed":39,"skipped":4564,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] NoExecuteTaintManager Multiple Pods [Serial][0m [1mevicts pods with minTolerationSeconds [Disruptive] [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] ... skipping 20 lines ... Jan 23 01:21:22.906: INFO: Noticed Pod "taint-eviction-b2" gets evicted. [1mSTEP[0m: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] test/e2e/framework/framework.go:188 Jan 23 01:21:23.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "taint-multiple-pods-6775" for this suite. 
[32m•[0m{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":61,"completed":40,"skipped":4574,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-node] Pods[0m [1mshould have their auto-restart back-off timer reset on image update [Slow][NodeConformance][0m [37mtest/e2e/common/node/pods.go:682[0m [BeforeEach] [sig-node] Pods ... skipping 29 lines ... [32m• [SLOW TEST:406.113 seconds][0m [sig-node] Pods [90mtest/e2e/common/node/framework.go:23[0m should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] [90mtest/e2e/common/node/pods.go:682[0m [90m------------------------------[0m {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":61,"completed":41,"skipped":4575,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-windows] [Feature:Windows] Cpu Resources [Serial][0m [90mContainer limits[0m [1mshould not be exceeded after waiting 2 minutes[0m [37mtest/e2e/windows/cpu_limits.go:43[0m [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] ... skipping 34 lines ... 
test/e2e/windows/framework.go:27
  Container limits
  test/e2e/windows/cpu_limits.go:42
    should not be exceeded after waiting 2 minutes
    test/e2e/windows/cpu_limits.go:43
------------------------------
{"msg":"PASSED [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","total":61,"completed":42,"skipped":4840,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:31:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2003" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":61,"completed":43,"skipped":4908,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:32:00.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2028" for this suite.
[32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":61,"completed":44,"skipped":4947,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow][0m [90mAllocatable node memory[0m [1mshould be equal to a calculated allocatable memory value[0m [37mtest/e2e/windows/memory_limits.go:54[0m [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 14 lines ... Jan 23 01:32:01.366: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}} [1mSTEP[0m: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/framework.go:188 Jan 23 01:32:01.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "memory-limit-test-windows-9668" for this suite. 
[32m•[0m{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value","total":61,"completed":45,"skipped":5211,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 89 lines ... Jan 23 01:32:21.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-gvhc6" in namespace "gc-8821" Jan 23 01:32:21.653: INFO: Deleting pod "simpletest-rc-to-be-deleted-gxr2s" in namespace "gc-8821" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 23 01:32:21.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-8821" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":61,"completed":46,"skipped":5359,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-scheduling] SchedulerPredicates [Serial][0m [1mvalidates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] ... skipping 37 lines ... Jan 23 01:32:22.214: INFO: Container csi-proxy ready: true, restart count 0 Jan 23 01:32:22.214: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded) Jan 23 01:32:22.214: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/framework/framework.go:652 [1mSTEP[0m: Trying to launch a pod without a label to get a node which can launch it. 
Jan 23 01:33:22.357: FAIL: Unexpected error:
    <*errors.errorString | 0xc00021c1e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
... skipping 138 lines ...
• Failure [65.776 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] [It]
  test/e2e/framework/framework.go:652

  Jan 23 01:33:22.357: Unexpected error:
      <*errors.errorString | 0xc00021c1e0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred
... skipping 16 lines ...
test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000503040, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":61,"completed":46,"skipped":5380,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 51 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:33:46.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-568" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":61,"completed":47,"skipped":5460,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PodTopologySpread Preemption
  validates proper pods are preempted
  test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:35:34.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-602" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":61,"completed":48,"skipped":5485,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-api-machinery] Garbage collector
  should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:1040
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 11 lines ...
STEP: wait for the owner to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:36:37.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6911" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":61,"completed":49,"skipped":5587,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
Jan 23 01:37:13.903: INFO: Deleting pod "simpletest.rc-fqltg" in namespace "gc-4646"
Jan 23 01:37:13.943: INFO: Deleting pod "simpletest.rc-v49pq" in namespace "gc-4646"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:37:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4646" for this suite.
[32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":61,"completed":50,"skipped":5625,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] NoExecuteTaintManager Single Pod [Serial][0m [1mremoving taint cancels eviction [Disruptive] [Conformance][0m [37mtest/e2e/framework/framework.go:652[0m [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] ... skipping 28 lines ... 
• [SLOW TEST:135.859 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":61,"completed":51,"skipped":5804,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 47 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:39:31.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3449" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":61,"completed":52,"skipped":5947,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-apps] Daemon set [Serial]
  should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 61 lines ...
Jan 23 01:39:42.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36634"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:39:42.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8567" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":61,"completed":53,"skipped":6016,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 19 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Jan 23 01:41:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3580" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":61,"completed":54,"skipped":6168,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 23 01:41:17.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-854" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":61,"completed":55,"skipped":6207,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  [Serial] [Slow] Deployment
  Should scale from 5 pods to 3 pods and from 3 to 1
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 208 lines ...
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
    Should scale from 5 pods to 3 pods and from 3 to 1
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":56,"skipped":6286,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  attempt to deploy past allocatable memory limits
  should fail deployments of pods once there isn't enough memory
  test/e2e/windows/memory_limits.go:60
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 23 01:52:38.031: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/windows/memory_limits.go:48
[It] should fail deployments of pods once there isn't enough memory
  test/e2e/windows/memory_limits.go:60
Jan 23 01:52:38.440: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 23 01:52:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-limit-test-windows-4139" for this suite.
[32m•[0m{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory","total":61,"completed":57,"skipped":6682,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)[0m [90m[Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)[0m [1mShould not scale up on a busy sidecar with an idle application[0m [37mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:103[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ... skipping 94 lines ... 
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
    Should not scale up on a busy sidecar with an idle application
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application","total":61,"completed":58,"skipped":6826,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
Jan 23 01:56:33.489: INFO: Running AfterSuite actions on all nodes
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Jan 23 01:56:33.489: INFO: Running AfterSuite actions on node 1
Jan 23 01:56:33.489: INFO: Skipping dumping logs from cluster

JUnit report was created: /output/junit_kubetest.01.xml

{"msg":"Test Suite completed","total":61,"completed":58,"skipped":6914,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
test/e2e/scheduling/predicates.go:883

Ran 59 of 6973 Specs in 9269.842 seconds
FAIL! -- 58 Passed | 1 Failed | 0 Pending | 6914 Skipped
--- FAIL: TestE2E (9272.52s)
FAIL

Ginkgo ran 1 suite in 2h34m32.687547334s
Test Suite Failed
[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91
Jan 23 01:56:33.912: INFO: FAILED!
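Note that the one failing spec never reached the host-port check itself: it timed out at the very first step, "Trying to launch a pod without a label to get a node which can launch it", so the pause pod used to pick a node was not scheduled within the framework's timeout. For anyone reproducing the scenario the spec is meant to cover, the conflicting pair looks roughly like the sketch below; the names, port number and image are illustrative, and the node is pinned through a nodeSelector so the scheduler still performs the host-port conflict check.

# Two pods with the same hostPort/protocol on one node. The second pod binds
# 0.0.0.0 and must be rejected by the scheduler as conflicting with the first,
# which binds 127.0.0.1. Fill in <node-name> from `kubectl get nodes`.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-127-0-0-1
spec:
  nodeSelector:
    kubernetes.io/hostname: <node-name>
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-0-0-0-0           # expected to stay Pending: host port conflict
spec:
  nodeSelector:
    kubernetes.io/hostname: <node-name>
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 0.0.0.0
      protocol: TCP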
Jan 23 01:56:33.915: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
STEP: Dumping logs from the "capz-conf-zs64h3" workload cluster @ 01/23/23 01:56:33.915
Jan 23 01:56:33.915: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 logs
Jan 23 01:56:34.027: INFO: Collecting logs for Linux node capz-conf-zs64h3-control-plane-dlccj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3
Jan 23 01:57:12.118: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-control-plane-dlccj
Jan 23 01:57:13.070: INFO: Collecting logs for Windows node capz-conf-96jhk in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3
Jan 23 01:59:13.743: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-96jhk to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q88x8/crashdumps.tar
Jan 23 01:59:15.369: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-96jhk
Failed to get logs for Machine capz-conf-zs64h3-md-win-67dfd985d8-q88x8, Cluster capz-conf-zs64h3/capz-conf-zs64h3: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
Jan 23 01:59:16.404: INFO: Collecting logs for Windows node capz-conf-2xrmj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3
Jan 23 02:01:16.071: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-2xrmj to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q945m/crashdumps.tar
Jan 23 02:01:17.701: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-2xrmj
Failed to get logs for Machine capz-conf-zs64h3-md-win-67dfd985d8-q945m, Cluster capz-conf-zs64h3/capz-conf-zs64h3: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
Jan 23 02:01:18.900: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 kube-system pod logs
Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-4445b, container calico-apiserver
Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-gzr5r, container calico-apiserver
Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-4445b
Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-gzr5r
Jan 23 02:01:19.293: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-r76g2
... skipping 69 lines ...
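The two "Failed to get logs for Machine ... Process exited with status 1" entries above come from the crash-dump collection command quoted in the log, run on the Windows nodes during cleanup. As a purely local, hypothetical illustration of how a wrapper that shells out to that same PowerShell snippet surfaces a non-zero exit status, assuming powershell.exe is on PATH (this is not the CAPZ log collector, which runs the command over a remote session):

```go
// Hypothetical local illustration only: run the crash-dump PowerShell snippet
// from the log via os/exec and report its exit code, the same signal that the
// log collector reports as "Process exited with status 1".
package main

import (
	"fmt"
	"os/exec"
)

// The command text copied from the log entries above.
const crashDumpCmd = `$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }`

func main() {
	// Assumes a Windows host with powershell.exe on PATH.
	out, err := exec.Command("powershell.exe", "-NoProfile", "-Command", crashDumpCmd).CombinedOutput()
	fmt.Printf("output:\n%s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit from the script (or from tar.exe) shows up here.
		fmt.Printf("command exited with status %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("failed to start command: %v\n", err)
	}
}
```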
INFO: Waiting for the Cluster capz-conf-zs64h3/capz-conf-zs64h3 to be deleted
STEP: Waiting for cluster capz-conf-zs64h3 to be deleted @ 01/23/23 02:01:21.282
Jan 23 02:08:11.578: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-zs64h3
Jan 23 02:08:11.598: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs @ 01/23/23 02:08:12.295
• [FAILED] [10593.603 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100

[FAILED] Unexpected error:
    <*errors.withStack | 0xc0009ac150>: {
        error: <*errors.withMessage | 0xc000956820>{
            cause: <*errors.errorString | 0xc0004f4b60>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91

Full Stack Trace
  sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 +0x18fa
... skipping 8 lines ...

[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.026 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238

Ran 1 of 23 Specs in 10744.654 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 22 Skipped
--- FAIL: TestE2E (10744.68s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

Ginkgo ran 1 suite in 3h1m34.864876414s
Test Suite Failed
make[3]: *** [Makefile:655: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:670: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:686: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:696: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
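The Ginkgo deprecation warning in the output above (CurrentGinkgoTestDescription() at common.go:278 and :281) points at the v2 migration to CurrentSpecReport(). Below is a minimal sketch of that replacement, assuming an ordinary Ginkgo v2 + Gomega test suite; the suite and spec names are illustrative and this is not the code in test/e2e/common.go.

```go
// Hypothetical sketch of the Ginkgo v2 replacement the deprecation warning
// recommends: CurrentSpecReport() instead of CurrentGinkgoTestDescription().
package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "spec report example suite")
}

var _ = Describe("spec report example", func() {
	AfterEach(func() {
		// v2: one report value carries the spec's text, location, and outcome,
		// replacing the fields of the old CurrentGinkgoTestDescription() struct.
		report := CurrentSpecReport()
		if report.Failed() {
			GinkgoWriter.Printf("spec %q failed at %s\n", report.FullText(), report.LeafNodeLocation)
		}
	})

	It("passes so the AfterEach has a report to inspect", func() {
		Expect(1 + 1).To(Equal(2))
	})
})
```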
================================================================================
... skipping 6 lines ...