| Recent runs | View in Spyglass |
| --- | --- |
| Result | FAILURE |
| Tests | 1 failed / 0 succeeded |
| Started | |
| Elapsed | 4h28m |
| Revision | main |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
Unexpected error:
    <*errors.withStack | 0xc00256abe8>: {
        error: <*errors.withMessage | 0xc0016a0240>{
            cause: <*errors.errorString | 0xc0004578e0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x2eef138, 0x32e7267, 0x1876bd7, 0x32e7033, 0x14384a5, 0x143799c, 0x187843c, 0x1879451, 0x1878e45, 0x18784db, 0x187e769, 0x187e152, 0x188aa51, 0x188a776, 0x1889dc5, 0x188c485, 0x1899ce9, 0x1899afe, 0x32ec1d8, 0x148dc0b, 0x13c57c1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:232
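The failure above is a github.com/pkg/errors chain: a plain `*errors.errorString` cause wrapped into `*withMessage` and `*withStack`. A minimal sketch (assuming the pkg/errors module; the message strings are taken from the dump) of how such a chain is built and inspected:

```go
package main

import (
	"errors"
	"fmt"

	pkgerrors "github.com/pkg/errors"
)

func main() {
	// The innermost cause shown in the dump: a plain *errors.errorString.
	cause := errors.New("error container run failed with exit code 1")

	// pkgerrors.Wrap adds a message (*withMessage) and a stack (*withStack),
	// which matches the nesting printed in the failure output above.
	wrapped := pkgerrors.Wrap(cause, "Unable to run conformance tests")

	fmt.Println(wrapped)                   // "Unable to run conformance tests: error container run failed with exit code 1"
	fmt.Println(errors.Is(wrapped, cause)) // true: the original cause is still reachable through Unwrap
	fmt.Printf("%+v\n", wrapped)           // %+v also prints the stack frames recorded at wrap time
}
```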
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:55
INFO: Cluster name is capz-conf-gdu8bn
STEP: Creating namespace "capz-conf-gdu8bn" for hosting the cluster
Nov 6 00:58:55.428: INFO: starting to create namespace for hosting the "capz-conf-gdu8bn" test spec
INFO: Creating namespace capz-conf-gdu8bn
INFO: Creating event watcher for namespace "capz-conf-gdu8bn"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:99
INFO: Creating the workload cluster with name "capz-conf-gdu8bn" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.26.0-alpha.3.239+1f9e20eb8617e3, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-gdu8bn --infrastructure (default) --kubernetes-version v1.26.0-alpha.3.239+1f9e20eb8617e3 --control-plane-machine-count 1 --worker-machine-count 0 --flavor conformance-ci-artifacts-windows-containerd
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-gdu8bn/capz-conf-gdu8bn-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-gdu8bn/capz-conf-gdu8bn-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-gdu8bn-md-0 are in the "<None>" failure domain
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by capz-conf-gdu8bn-md-win are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
INFO: Using repo-list '' for version 'v1.26.0-alpha.3.239+1f9e20eb8617e3'
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "-ginkgo.v=true" "-prepull-images=true" "-dump-logs-on-failure=true" "-ginkgo.flakeAttempts=0" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.progress=true" "-ginkgo.slow-spec-threshold=120s" "-disable-log-dump=true" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.timeout=4h" "-ginkgo.trace=true" "-node-os-distro=windows"]
I1106 01:06:06.107686 14 e2e.go:125] Starting e2e run "d94189a6-1d35-43ff-b4a0-56464dc251f8" on Ginkgo node 1
Nov 6 01:06:06.125: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1667696766 - will randomize all specs
Will run 81 of 6609 specs
------------------------------
[SynchronizedBeforeSuite]
  test/e2e/e2e.go:76
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
Nov 6 01:06:06.368: INFO: >>> kubeConfig: /tmp/kubeconfig
Nov 6 01:06:06.370: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 6 01:06:06.560: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 6 01:06:06.674: INFO: The status of Pod calico-node-windows-hsdvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Nov 6 01:06:06.674: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 6 01:06:06.675: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Nov 6 01:06:06.675: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 6 01:06:06.675: INFO: calico-node-windows-hsdvh capz-conf-ppc2q Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:05:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:06:04 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:06:04 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:05:08 +0000 UTC }]
Nov 6 01:06:06.675: INFO:
Nov 6 01:06:08.786: INFO: The status of Pod calico-node-windows-hsdvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Nov 6 01:06:08.786: INFO: 17 / 18 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Nov 6 01:06:08.786: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Nov 6 01:06:08.786: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 6 01:06:08.786: INFO: calico-node-windows-hsdvh capz-conf-ppc2q Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:05:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:06:04 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:06:04 +0000 UTC ContainersNotReady containers with unready status: [calico-node-felix]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-06 01:05:08 +0000 UTC }]
Nov 6 01:06:08.786: INFO:
Nov 6 01:06:10.799: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Nov 6 01:06:10.799: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Nov 6 01:06:10.799: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 6 01:06:10.849: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed)
Nov 6 01:06:10.849: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node-windows' (0 seconds elapsed)
Nov 6 01:06:10.849: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'containerd-logger' (0 seconds elapsed)
Nov 6 01:06:10.849: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'csi-proxy' (0 seconds elapsed)
Nov 6 01:06:10.850: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 6 01:06:10.850: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-windows' (0 seconds elapsed)
Nov 6 01:06:10.850: INFO: Pre-pulling images so that they are cached for the tests.
Nov 6 01:06:11.114: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40
Nov 6 01:06:11.153: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:11.202: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Nov 6 01:06:11.202: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:06:20.244: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:20.291: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Nov 6 01:06:20.291: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:06:29.246: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:29.289: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 0
Nov 6 01:06:29.289: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:06:38.239: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:38.283: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40: 2
Nov 6 01:06:38.283: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-agnhost-2.40
Nov 6 01:06:38.283: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2
Nov 6 01:06:38.318: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:38.361: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2: 2
Nov 6 01:06:38.361: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-busybox-1.29-2
Nov 6 01:06:38.361: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Nov 6 01:06:38.396: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:38.440: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 1
Nov 6 01:06:38.440: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:06:47.476: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:47.519: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2: 2
Nov 6 01:06:47.519: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-httpd-2.4.38-2
Nov 6 01:06:47.519: INFO: Waiting for img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Nov 6 01:06:47.554: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:06:47.597: INFO: Number of nodes with available pods controlled by daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2: 2
Nov 6 01:06:47.598: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset img-pull-registry.k8s.io-e2e-test-images-nginx-1.14-2
Nov 6 01:06:47.630: INFO: e2e test version: v1.26.0-alpha.3.239+1f9e20eb8617e3
Nov 6 01:06:47.657: INFO: kube-apiserver version: v1.26.0-alpha.3.239+1f9e20eb8617e3
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
Nov 6 01:06:47.657: INFO: >>> kubeConfig: /tmp/kubeconfig
Nov 6 01:06:47.687: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [41.319 seconds]
[SynchronizedBeforeSuite]
  test/e2e/e2e.go:76
------------------------------
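The suite is narrowed to 81 of 6609 specs by the `-ginkgo.focus` and `-ginkgo.skip` expressions passed to e2e.test above. A small stdlib-only sketch that checks two spec titles from this log against those same expressions (this only illustrates the selection; Ginkgo applies the regexes itself):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus and skip expressions copied from the -ginkgo.focus / -ginkgo.skip flags above.
	focus := regexp.MustCompile(`(\[sig-windows\]|\[sig-scheduling\].SchedulerPreemption|\[sig-autoscaling\].\[Feature:HPA\]|\[sig-apps\].CronJob).*(\[Serial\]|\[Slow\])|(\[Serial\]|\[Slow\]).*(\[Conformance\]|\[NodeConformance\])|\[sig-api-machinery\].Garbage.collector`)
	skip := regexp.MustCompile(`\[LinuxOnly\]|\[Excluded:WindowsDocker\]|device.plugin.for.Windows`)

	// Two spec titles that appear later in this log.
	specs := []string{
		"[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]",
		"[sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil",
	}
	for _, s := range specs {
		// A spec runs when it matches the focus expression and does not match the skip expression.
		selected := focus.MatchString(s) && !skip.MatchString(s)
		fmt.Printf("selected=%v  %s\n", selected, s)
	}
}
```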
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
[BeforeEach] [sig-api-machinery] Garbage collector
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:06:47.729
Nov 6 01:06:47.729: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 11/06/22 01:06:47.73
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:06:47.816
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:06:47.869
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:31
[It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
STEP: create the rc 11/06/22 01:06:47.923
STEP: delete the rc 11/06/22 01:06:52.985
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 11/06/22 01:06:53.021
STEP: Gathering metrics 11/06/22 01:07:23.055
Nov 6 01:07:23.167: INFO: Waiting up to 5m0s for pod "kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t" in namespace "kube-system" to be "running and ready"
Nov 6 01:07:23.199: INFO: Pod "kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t": Phase="Running", Reason="", readiness=true. Elapsed: 32.186697ms
Nov 6 01:07:23.199: INFO: The phase of Pod kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t is Running (Ready = true)
Nov 6 01:07:23.199: INFO: Pod "kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t" satisfied condition "running and ready"
Nov 6 01:07:23.562: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
Nov 6 01:07:23.562: INFO: Deleting pod "simpletest.rc-56qxq" in namespace "gc-9146"
Nov 6 01:07:23.610: INFO: Deleting pod "simpletest.rc-g9htq" in namespace "gc-9146"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/node/init/init.go:32
Nov 6 01:07:23.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  tear down framework | framework.go:193
STEP: Destroying namespace "gc-9146" for this suite. 11/06/22 01:07:23.692
------------------------------
• [36.000 seconds]
[sig-api-machinery] Garbage collector
  test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
------------------------------
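The spec above deletes a ReplicationController while leaving `deleteOptions.OrphanDependents` unset and then verifies the garbage collector does not remove the RC's pods. A hedged client-go sketch of issuing that kind of delete; the kubeconfig path and namespace are taken from the log, and the RC name `simpletest.rc` is inferred from the pod names, so treat both as illustrative:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kind of kubeconfig the suite uses (/tmp/kubeconfig in this run).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Delete the replication controller without setting OrphanDependents or
	// PropagationPolicy; the spec above then waits 30 seconds and checks that
	// the garbage collector did not delete the RC's pods.
	err = cs.CoreV1().ReplicationControllers("gc-9146").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
```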
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should apply changes to a namespace status [Conformance]
  test/e2e/apimachinery/namespace.go:299
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:07:23.732
Nov 6 01:07:23.732: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces 11/06/22 01:07:23.733
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:07:23.831
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:07:23.891
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should apply changes to a namespace status [Conformance]
  test/e2e/apimachinery/namespace.go:299
STEP: Read namespace status 11/06/22 01:07:23.951
Nov 6 01:07:23.984: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}
STEP: Patch namespace status 11/06/22 01:07:23.984
Nov 6 01:07:24.023: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"}
STEP: Update namespace status 11/06/22 01:07:24.023
Nov 6 01:07:24.094: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"}
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Nov 6 01:07:24.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-4798" for this suite. 11/06/22 01:07:24.129
------------------------------
• [0.434 seconds]
[sig-api-machinery] Namespaces [Serial]
  test/e2e/apimachinery/framework.go:23
  should apply changes to a namespace status [Conformance]
  test/e2e/apimachinery/namespace.go:299
------------------------------
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/common/node/expansion.go:186
[BeforeEach] [sig-node] Variable Expansion
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:07:24.17
Nov 6 01:07:24.171: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 11/06/22 01:07:24.171
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:07:24.273
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:07:24.333
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/metrics/init/init.go:31
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/common/node/expansion.go:186
Nov 6 01:07:24.434: INFO: Waiting up to 2m0s for pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9" in namespace "var-expansion-7721" to be "container 0 failed with reason CreateContainerConfigError"
Nov 6 01:07:24.470: INFO: Pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.77615ms
Nov 6 01:07:26.502: INFO: Pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06756459s
Nov 6 01:07:28.508: INFO: Pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07336665s
Nov 6 01:07:28.508: INFO: Pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Nov 6 01:07:28.508: INFO: Deleting pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9" in namespace "var-expansion-7721"
Nov 6 01:07:28.545: INFO: Wait up to 5m0s for pod "var-expansion-1e437e7b-4e78-41f9-967c-1462a2d955b9" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/node/init/init.go:32
Nov 6 01:07:30.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Variable Expansion
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Variable Expansion
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-node] Variable Expansion
  tear down framework | framework.go:193
STEP: Destroying namespace "var-expansion-7721" for this suite. 11/06/22 01:07:30.643
------------------------------
• [6.510 seconds]
[sig-node] Variable Expansion
  test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/common/node/expansion.go:186
------------------------------
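The Variable Expansion spec above expects `CreateContainerConfigError` because a volume `subPathExpr` expands to an absolute path, which the kubelet rejects. A sketch of the shape of such a pod spec using `k8s.io/api/core/v1`; the container name, image, and paths are illustrative assumptions, not the exact values the e2e test submits:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// SubPathExpr expands $(POD_NAME), but the result starts with "/", i.e. an
	// absolute path, so container creation fails with CreateContainerConfigError
	// (the condition the spec above waits for).
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "var-expansion",
				Image: "registry.k8s.io/e2e-test-images/busybox:1.29-2", // assumed from the pre-pulled images above
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "/tmp/$(POD_NAME)", // absolute path -> rejected
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPathExpr)
}
```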
------------------------------
[sig-api-machinery] Garbage collector
  should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:905
[BeforeEach] [sig-api-machinery] Garbage collector
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:07:30.694
Nov 6 01:07:30.694: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 11/06/22 01:07:30.696
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:07:30.793
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:07:30.853
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:31
[It] should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:905
Nov 6 01:07:30.914: INFO: >>> kubeConfig: /tmp/kubeconfig
Nov 6 01:07:33.144: INFO: created owner resource "ownerj44qz"
Nov 6 01:07:33.187: INFO: created dependent resource "dependentwn2q7"
Nov 6 01:07:33.256: INFO: created canary resource "canaryjnplg"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/node/init/init.go:32
Nov 6 01:08:03.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  tear down framework | framework.go:193
STEP: Destroying namespace "gc-6211" for this suite. 11/06/22 01:08:03.528
------------------------------
• [32.870 seconds]
[sig-api-machinery] Garbage collector
  test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:905
------------------------------
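The cascading-deletion spec above creates an owner custom resource, a dependent that points back at it, and a canary, then deletes the owner and expects the garbage collector to remove the dependent. A sketch of how a dependent references its owner via `ownerReferences`; the API group, kinds, and UID below are made-up placeholders, only the resource names come from the log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// Hypothetical owner custom resource (the spec above created "ownerj44qz").
	owner := &unstructured.Unstructured{}
	owner.SetAPIVersion("mygroup.example.com/v1")
	owner.SetKind("Owner")
	owner.SetName("ownerj44qz")
	owner.SetUID("11111111-2222-3333-4444-555555555555") // normally assigned by the API server

	// The dependent lists the owner in ownerReferences; once the owner is
	// deleted, the garbage collector cascades the deletion to the dependent,
	// which is what the spec above verifies.
	dependent := &unstructured.Unstructured{}
	dependent.SetAPIVersion("mygroup.example.com/v1")
	dependent.SetKind("Dependent")
	dependent.SetName("dependentwn2q7")
	dependent.SetOwnerReferences([]metav1.OwnerReference{{
		APIVersion: owner.GetAPIVersion(),
		Kind:       owner.GetKind(),
		Name:       owner.GetName(),
		UID:        owner.GetUID(),
	}})

	fmt.Println(dependent.GetOwnerReferences()[0].Name)
}
```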
11/06/22 01:08:03.528
<< End Captured GinkgoWriter Output
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
test/e2e/scheduling/predicates.go:461
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:08:03.577
Nov 6 01:08:03.577: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred 11/06/22 01:08:03.578
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:08:03.675
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:08:03.736
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92
Nov 6 01:08:03.808: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 6 01:08:03.879: INFO: Waiting for terminating namespaces to be deleted...
Nov 6 01:08:03.907: INFO: Logging pods the apiserver thinks is on node capz-conf-6qqvv before test
Nov 6 01:08:03.941: INFO: calico-node-windows-wq7jf from kube-system started at 2022-11-06 01:05:13 +0000 UTC (2 container statuses recorded)
Nov 6 01:08:03.941: INFO: Container calico-node-felix ready: true, restart count 1
Nov 6 01:08:03.941: INFO: Container calico-node-startup ready: true, restart count 0
Nov 6 01:08:03.941: INFO: containerd-logger-4c4v9 from kube-system started at 2022-11-06 01:05:13 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.941: INFO: Container containerd-logger ready: true, restart count 0
Nov 6 01:08:03.941: INFO: csi-proxy-d7klv from kube-system started at 2022-11-06 01:05:43 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.941: INFO: Container csi-proxy ready: true, restart count 0
Nov 6 01:08:03.941: INFO: kube-proxy-windows-mg9dn from kube-system started at 2022-11-06 01:05:13 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.941: INFO: Container kube-proxy ready: true, restart count 0
Nov 6 01:08:03.941: INFO: Logging pods the apiserver thinks is on node capz-conf-ppc2q before test
Nov 6 01:08:03.977: INFO: calico-node-windows-hsdvh from kube-system started at 2022-11-06 01:05:08 +0000 UTC (2 container statuses recorded)
Nov 6 01:08:03.977: INFO: Container calico-node-felix ready: true, restart count 1
Nov 6 01:08:03.977: INFO: Container calico-node-startup ready: true, restart count 0
Nov 6 01:08:03.977: INFO: containerd-logger-s25tr from kube-system started at 2022-11-06 01:05:08 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.977: INFO: Container containerd-logger ready: true, restart count 0
Nov 6 01:08:03.977: INFO: csi-proxy-vqp4q from kube-system started at 2022-11-06 01:05:39 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.977: INFO: Container csi-proxy ready: true, restart count 0
Nov 6 01:08:03.977: INFO: kube-proxy-windows-vmt8g from kube-system started at 2022-11-06 01:05:08 +0000 UTC (1 container statuses recorded)
Nov 6 01:08:03.977: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461
STEP: Trying to launch a pod without a label to get a node which can launch it. 11/06/22 01:08:03.978
Nov 6 01:08:04.013: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6486" to be "running"
Nov 6 01:08:04.047: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 33.518197ms
Nov 6 01:08:06.074: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061469517s
Nov 6 01:08:08.076: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06318048s
Nov 6 01:08:10.076: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062503082s
Nov 6 01:08:12.075: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061827466s
Nov 6 01:08:14.075: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061940857s
Nov 6 01:08:16.075: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061908454s
Nov 6 01:08:18.077: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 14.063966593s
Nov 6 01:08:20.077: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064074739s
Nov 6 01:08:22.075: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 18.062349848s
Nov 6 01:08:24.077: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 20.06422308s
Nov 6 01:08:24.077: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 11/06/22 01:08:24.105
STEP: Trying to apply a random label on the found node. 11/06/22 01:08:24.145
STEP: verifying the node has the label kubernetes.io/e2e-83543bf0-08dd-4d86-9d9a-036e0378fa8f 42 11/06/22 01:08:24.184
STEP: Trying to relaunch the pod, now with labels. 11/06/22 01:08:24.212
Nov 6 01:08:24.246: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-6486" to be "not pending"
Nov 6 01:08:24.280: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 33.632315ms
Nov 6 01:08:26.309: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062442289s
Nov 6 01:08:28.309: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06269845s
Nov 6 01:08:30.309: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 6.062751019s
Nov 6 01:08:30.309: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-83543bf0-08dd-4d86-9d9a-036e0378fa8f off the node capz-conf-6qqvv 11/06/22 01:08:30.337
STEP: verifying the node doesn't have the label kubernetes.io/e2e-83543bf0-08dd-4d86-9d9a-036e0378fa8f 11/06/22 01:08:30.403
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Nov 6 01:08:30.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-6486" for this suite. 11/06/22 01:08:30.464
------------------------------
• [26.918 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
test/e2e/scheduling/predicates.go:461
------------------------------
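The spec above exercises basic nodeSelector matching: a throwaway pod finds a schedulable node, a random label is applied to that node, and a second pod that requests the label must land there. A minimal manifest sketch of the same idea (the label key/value, pod name, and image are illustrative, not the exact objects the e2e framework creates):

# Assumes the target node was labelled first, e.g.:
#   kubectl label node capz-conf-6qqvv example.com/e2e-demo=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # must match the node label exactly for the pod to schedule
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8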
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
[Serial] [Slow] Deployment (Container Resource)
Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:08:30.503
Nov 6 01:08:30.503: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 01:08:30.504
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:08:30.595
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:08:30.649
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
Nov 6 01:08:30.703: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 01:08:30.705
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-1943 11/06/22 01:08:30.747
I1106 01:08:30.780994 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-1943, replica count: 1
I1106 01:08:40.834474 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1106 01:08:50.834827 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 01:08:50.834
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-1943 11/06/22 01:08:50.879
I1106 01:08:50.913655 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-1943, replica count: 1
I1106 01:09:00.964422 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 01:09:05.964: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Nov 6 01:09:05.995: INFO: RC test-deployment: consume 250 millicores in total
Nov 6 01:09:05.995: INFO: RC test-deployment: setting consumption to 250 millicores in total
Nov 6 01:09:05.995: INFO: RC test-deployment: sending request to consume 250 millicores
Nov 6 01:09:05.995: INFO: RC test-deployment: consume 0 MB in total
Nov 6 01:09:05.995: INFO: RC test-deployment: disabling mem consumption
Nov 6 01:09:05.995: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1943/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov 6 01:09:05.995: INFO: RC test-deployment: consume custom metric 0 in total
Nov 6 01:09:05.996: INFO: RC test-deployment: disabling consumption of custom metric QPS
Nov 6 01:09:06.058: INFO: waiting for 3 replicas (current: 1)
Nov 6 01:09:26.088: INFO: waiting for 3 replicas (current: 1)
Nov 6 01:09:42.068: INFO: RC test-deployment: sending request to consume 250 millicores
Nov 6 01:09:42.068: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1943/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov 6 01:09:46.086: INFO: waiting for 3 replicas (current: 2)
Nov 6 01:10:06.089: INFO: waiting for 3 replicas (current: 3)
Nov 6 01:10:06.089: INFO: RC test-deployment: consume 700 millicores in total
Nov 6 01:10:06.089: INFO: RC test-deployment: setting consumption to 700 millicores in total
Nov 6 01:10:06.117: INFO: waiting for 5 replicas (current: 3)
Nov 6 01:10:15.123: INFO: RC test-deployment: sending request to consume 700 millicores
Nov 6 01:10:15.123: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1943/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 }
Nov 6 01:10:26.149: INFO: waiting for 5 replicas (current: 3)
Nov 6 01:10:46.150: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC test-deployment 11/06/22 01:10:46.186
Nov 6 01:10:46.186: INFO: RC test-deployment: stopping metric consumer
Nov 6 01:10:46.186: INFO: RC test-deployment: stopping CPU consumer
Nov 6 01:10:46.186: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-1943, will wait for the garbage collector to delete the pods 11/06/22 01:10:56.186
Nov 6 01:10:56.298: INFO: Deleting Deployment.apps test-deployment took: 33.238351ms
Nov 6 01:10:56.399: INFO: Terminating Deployment.apps test-deployment pods took: 100.978689ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-1943, will wait for the garbage collector to delete the pods 11/06/22 01:10:58.862
Nov 6 01:10:58.973: INFO: Deleting ReplicationController test-deployment-ctrl took: 31.320367ms
Nov 6 01:10:59.073: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.837605ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32
Nov 6 01:11:01.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-1943" for this suite. 11/06/22 01:11:01.06
------------------------------
• [SLOW TEST] [150.593 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
[Serial] [Slow] Deployment (Container Resource)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:60
Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
------------------------------
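The spec above drives CPU load through the resource-consumer service and waits for the HorizontalPodAutoscaler to take the Deployment from 1 to 3 and then to 5 replicas. A rough autoscaling/v2 sketch of the kind of autoscaler involved (names, bounds, and the utilization target are illustrative; the e2e framework constructs its own object, and the "(Container Resource)" variant targets per-container CPU rather than whole-pod CPU):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20   # illustrative target; raising consumed millicores pushes observed utilization above it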
------------------------------
[sig-node] Pods
should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
test/e2e/common/node/pods.go:717
[BeforeEach] [sig-node] Pods set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:11:01.097
Nov 6 01:11:01.097: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods 11/06/22 01:11:01.098
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:11:01.192
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:11:01.245
[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:717
Nov 6 01:11:01.335: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-961" to be "running and ready"
Nov 6 01:11:01.364: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 29.335961ms
Nov 6 01:11:01.364: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:11:03.394: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05939029s
Nov 6 01:11:03.394: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:11:05.393: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058465549s
Nov 6 01:11:05.393: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 6 01:11:07.393: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. Elapsed: 6.058611043s
Nov 6 01:11:07.394: INFO: The phase of Pod back-off-cap is Running (Ready = true)
Nov 6 01:11:07.394: INFO: Pod "back-off-cap" satisfied condition "running and ready"
STEP: getting restart delay when capped 11/06/22 01:21:07.422
Nov 6 01:22:53.399: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-11-06 01:17:37 +0000 UTC restartedAt=2022-11-06 01:22:51 +0000 UTC (5m14s)
Nov 6 01:28:09.024: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-11-06 01:22:56 +0000 UTC restartedAt=2022-11-06 01:28:08 +0000 UTC (5m12s)
Nov 6 01:33:17.289: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-11-06 01:28:13 +0000 UTC restartedAt=2022-11-06 01:33:16 +0000 UTC (5m3s)
STEP: getting restart delay after a capped delay 11/06/22 01:33:17.289
Nov 6 01:38:34.780: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-11-06 01:33:21 +0000 UTC restartedAt=2022-11-06 01:38:33 +0000 UTC (5m12s)
[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32
Nov 6 01:38:34.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193
STEP: Destroying namespace "pods-961" for this suite. 11/06/22 01:38:34.821
------------------------------
• [SLOW TEST] [1653.756 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
test/e2e/common/node/pods.go:717
------------------------------
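The restart delays logged above (5m14s, 5m12s, 5m3s, 5m12s) sit at the kubelet's restart back-off ceiling (MaxContainerBackOff, 300 seconds by default, plus jitter). The behaviour can be reproduced with any pod whose container keeps exiting under restartPolicy: Always; the pod name, image, and command below are illustrative only (on the Windows nodes in this run the test uses a Windows-compatible image):

apiVersion: v1
kind: Pod
metadata:
  name: back-off-cap
spec:
  restartPolicy: Always          # kubelet restarts with exponentially increasing delay, capped at ~5m
  containers:
  - name: crasher
    image: registry.k8s.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "sleep 5; exit 1"]   # exits repeatedly to drive CrashLoopBackOff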
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
PodTopologySpread Preemption
validates proper pods are preempted
test/e2e/scheduling/preemption.go:355
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:38:34.854
Nov 6 01:38:34.854: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 11/06/22 01:38:34.855
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:38:34.943
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:38:34.998
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Nov 6 01:38:35.147: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 6 01:39:35.462: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322
STEP: Trying to get 2 available nodes which can run pod 11/06/22 01:39:35.49
STEP: Trying to launch a pod without a label to get a node which can launch it. 11/06/22 01:39:35.49
Nov 6 01:39:35.534: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-138" to be "running"
Nov 6 01:39:35.562: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 27.858506ms
Nov 6 01:39:37.593: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058991276s
Nov 6 01:39:39.591: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.056600099s
Nov 6 01:39:39.591: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 11/06/22 01:39:39.619
STEP: Trying to launch a pod without a label to get a node which can launch it. 11/06/22 01:39:39.661
Nov 6 01:39:39.696: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-138" to be "running"
Nov 6 01:39:39.725: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 29.277699ms
Nov 6 01:39:41.754: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058419997s
Nov 6 01:39:43.755: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058643604s
Nov 6 01:39:45.754: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057915738s
Nov 6 01:39:47.755: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058928752s
Nov 6 01:39:49.755: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059073344s
Nov 6 01:39:51.755: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058937335s
Nov 6 01:39:53.756: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 14.05964568s
Nov 6 01:39:55.754: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 16.058339281s
Nov 6 01:39:57.754: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 18.058244517s
Nov 6 01:39:59.754: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 20.058231914s
Nov 6 01:40:01.756: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 22.059703265s
Nov 6 01:40:01.756: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 11/06/22 01:40:01.784
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 11/06/22 01:40:01.832
STEP: Apply 10 fake resource to node capz-conf-6qqvv. 11/06/22 01:40:01.902
STEP: Apply 10 fake resource to node capz-conf-ppc2q. 11/06/22 01:40:02.018
[It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 11/06/22 01:40:02.065
Nov 6 01:40:02.098: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-138" to be "running"
Nov 6 01:40:02.126: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 28.083862ms
Nov 6 01:40:04.156: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058676678s
Nov 6 01:40:06.155: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057220237s
Nov 6 01:40:08.156: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058466118s
Nov 6 01:40:10.158: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 8.060315748s
Nov 6 01:40:10.158: INFO: Pod "high" satisfied condition "running"
Nov 6 01:40:10.224: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-138" to be "running"
Nov 6 01:40:10.253: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.255979ms
Nov 6 01:40:12.282: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057159374s
Nov 6 01:40:14.284: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059578632s
Nov 6 01:40:16.282: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 6.057221531s
Nov 6 01:40:16.282: INFO: Pod "low-1" satisfied condition "running"
Nov 6 01:40:16.346: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-138" to be "running"
Nov 6 01:40:16.375: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.54345ms
Nov 6 01:40:18.404: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057590235s
Nov 6 01:40:20.405: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 4.059265201s
Nov 6 01:40:20.405: INFO: Pod "low-2" satisfied condition "running"
Nov 6 01:40:20.467: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-138" to be "running"
Nov 6 01:40:20.498: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.367419ms
Nov 6 01:40:22.526: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059648653s
Nov 6 01:40:24.528: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 4.060770668s
Nov 6 01:40:24.528: INFO: Pod "low-3" satisfied condition "running"
STEP: Create 1 Medium Pod with TopologySpreadConstraints 11/06/22 01:40:24.556
Nov 6 01:40:24.588: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-138" to be "running"
Nov 6 01:40:24.624: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 36.236569ms
Nov 6 01:40:26.653: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065387776s
Nov 6 01:40:28.654: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066129148s
Nov 6 01:40:30.654: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 6.066562288s
Nov 6 01:40:30.654: INFO: Pod "medium" satisfied condition "running"
STEP: Verify there are 3 Pods left in this namespace 11/06/22 01:40:30.682
STEP: Pod "high" is as expected to be running. 11/06/22 01:40:30.712
STEP: Pod "low-1" is as expected to be running. 11/06/22 01:40:30.712
STEP: Pod "medium" is as expected to be running. 11/06/22 01:40:30.712
[AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-6qqvv 11/06/22 01:40:30.712
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 11/06/22 01:40:30.778
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capz-conf-ppc2q 11/06/22 01:40:30.807
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 11/06/22 01:40:30.878
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Nov 6 01:40:30.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-138" for this suite. 11/06/22 01:40:31.169
------------------------------
• [116.347 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption
test/e2e/scheduling/preemption.go:316
validates proper pods are preempted
test/e2e/scheduling/preemption.go:355
------------------------------
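In the spec above, both nodes carry 10 units of a fake extended resource plus the dedicated kubernetes.io/e2e-pts-preemption topology label; one high-priority and three low-priority pods fill 9/10 of that resource on each node, so the medium-priority pod, which must spread evenly across the topology, can only schedule by preempting a low-priority pod. A sketch of such a constrained pod (the priority class name, extended-resource name, and quantities are illustrative, not the test's internal objects):

apiVersion: v1
kind: Pod
metadata:
  name: medium
  labels:
    app: pts-demo
spec:
  priorityClassName: medium-priority        # assumed to exist, with a value between the low and high classes
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/e2e-pts-preemption
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: pts-demo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
    resources:
      requests:
        example.com/fakecpu: "4"             # illustrative extended resource, standing in for the log's "fake resource"
      limits:
        example.com/fakecpu: "4"             # extended resources require requests == limits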
------------------------------
[sig-apps] CronJob
should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
test/e2e/apps/cronjob.go:124
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:40:31.212
Nov 6 01:40:31.212: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob 11/06/22 01:40:31.213
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:40:31.3
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:40:31.354
[BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/apps/cronjob.go:124
STEP: Creating a ForbidConcurrent cronjob 11/06/22 01:40:31.408
STEP: Ensuring a job is scheduled 11/06/22 01:40:31.447
STEP: Ensuring exactly one is scheduled 11/06/22 01:41:01.475
STEP: Ensuring exactly one running job exists by listing jobs explicitly 11/06/22 01:41:01.503
STEP: Ensuring no more jobs are scheduled 11/06/22 01:41:01.532
STEP: Removing cronjob 11/06/22 01:46:01.589
[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32
Nov 6 01:46:01.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-8772" for this suite. 11/06/22 01:46:01.658
------------------------------
• [SLOW TEST] [330.484 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
test/e2e/apps/cronjob.go:124
------------------------------
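The spec above verifies concurrencyPolicy: Forbid, i.e. the controller does not create a new Job while the previous run is still active. A minimal CronJob of that shape (schedule, image, and command are illustrative, not the object the test creates):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: forbid
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip a scheduled run while the previous Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: sleeper
            image: registry.k8s.io/e2e-test-images/busybox:1.29-2
            command: ["sleep", "300"]        # deliberately long-running so subsequent runs are forbidden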
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
PriorityClass endpoints
verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
test/e2e/scheduling/preemption.go:733
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:46:01.702
Nov 6 01:46:01.703: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption 11/06/22 01:46:01.704
object, basename sched-preemption �[38;5;243m11/06/22 01:46:01.704�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:46:01.797�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:46:01.854�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Nov 6 01:46:02.004: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:47:02.261: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:47:02.289�[0m Nov 6 01:47:02.289: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption-path �[38;5;243m11/06/22 01:47:02.29�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:47:02.379�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:47:02.434�[0m [BeforeEach] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:31 [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 Nov 6 01:47:02.580: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. Nov 6 01:47:02.608: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/node/init/init.go:32 Nov 6 01:47:02.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:47:02.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 [DeferCleanup (Each)] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] PriorityClass endpoints dump namespaces | framework.go:196 [DeferCleanup (Each)] PriorityClass endpoints tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-path-703" for this suite. �[38;5;243m11/06/22 01:47:03.029�[0m [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-4783" for this suite. 
�[38;5;243m11/06/22 01:47:03.062�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [61.391 seconds]�[0m [sig-scheduling] SchedulerPreemption [Serial] �[38;5;243mtest/e2e/scheduling/framework.go:40�[0m PriorityClass endpoints �[38;5;243mtest/e2e/scheduling/preemption.go:683�[0m verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] �[38;5;243mtest/e2e/scheduling/preemption.go:733�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:46:01.702�[0m Nov 6 01:46:01.703: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/06/22 01:46:01.704�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:46:01.797�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:46:01.854�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Nov 6 01:46:02.004: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:47:02.261: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:47:02.289�[0m Nov 6 01:47:02.289: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption-path �[38;5;243m11/06/22 01:47:02.29�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:47:02.379�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:47:02.434�[0m [BeforeEach] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:31 [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 Nov 6 01:47:02.580: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. Nov 6 01:47:02.608: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/node/init/init.go:32 Nov 6 01:47:02.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:47:02.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 [DeferCleanup (Each)] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] PriorityClass endpoints dump namespaces | framework.go:196 [DeferCleanup (Each)] PriorityClass endpoints tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "sched-preemption-path-703" for this suite. 
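The two "is invalid: value: Forbidden" lines are the expected outcome, not a failure: the spec updates existing PriorityClasses with a modified `value` and asserts that the API server rejects the change, because `value` is immutable after creation. A compact sketch of that check against a class named "p1" is below; the kubeconfig path and the pre-existing class are assumptions carried over from the log, not the test's exact code.

```go
// Sketch: show that PriorityClass.value cannot be changed in an update.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "p1" is assumed to be a PriorityClass created earlier, as in the log.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "p1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pc.Value += 100 // value is immutable, so this update must be rejected
	_, err = cs.SchedulingV1().PriorityClasses().Update(context.TODO(), pc, metav1.UpdateOptions{})
	fmt.Println("expected Forbidden error:", err)
}
```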
[sig-apps] Daemon set [Serial]
should verify changes to a daemon set status [Conformance]
test/e2e/apps/daemon_set.go:862
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:47:03.095
Nov 6 01:47:03.096: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/06/22 01:47:03.097
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:47:03.19
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:47:03.244
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146
[It] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:862
STEP: Creating simple DaemonSet "daemon-set" 11/06/22 01:47:03.454
STEP: Check that daemon pods launch on every node of the cluster. 11/06/22 01:47:03.491
Nov 6 01:47:03.531: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:03.566: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:03.566: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:04.597: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:04.651: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:04.651: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:05.602: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:05.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:05.632: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:06.597: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:06.627: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:06.628: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:07.597: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:07.626: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:07.627: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:08.598: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:08.627: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Nov 6 01:47:08.627: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Getting /status 11/06/22 01:47:08.655
Nov 6 01:47:08.684: INFO: Daemon Set daemon-set has Conditions: []
STEP: updating the DaemonSet Status 11/06/22 01:47:08.684
Nov 6 01:47:08.745: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the daemon set status to be updated 11/06/22 01:47:08.745
Nov 6 01:47:08.773: INFO: Observed &DaemonSet event: ADDED
Nov 6 01:47:08.773: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.773: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.773: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.773: INFO: Found daemon set daemon-set in namespace daemonsets-9122 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Nov 6 01:47:08.773: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status 11/06/22 01:47:08.774
STEP: watching for the daemon set status to be patched 11/06/22 01:47:08.809
Nov 6 01:47:08.837: INFO: Observed &DaemonSet event: ADDED
Nov 6 01:47:08.837: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.837: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.838: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.838: INFO: Observed daemon set daemon-set in namespace daemonsets-9122 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Nov 6 01:47:08.838: INFO: Observed &DaemonSet event: MODIFIED
Nov 6 01:47:08.838: INFO: Found daemon set daemon-set in namespace daemonsets-9122 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }]
Nov 6 01:47:08.838: INFO: Daemon set daemon-set has a patched status
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111
STEP: Deleting DaemonSet "daemon-set" 11/06/22 01:47:08.866
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9122, will wait for the garbage collector to delete the pods 11/06/22 01:47:08.867
Nov 6 01:47:08.978: INFO: Deleting DaemonSet.extensions daemon-set took: 32.4727ms
Nov 6 01:47:09.079: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.852455ms
Nov 6 01:47:14.208: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:14.208: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Nov 6 01:47:14.237: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5666"},"items":null}
Nov 6 01:47:14.267: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5666"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32
Nov 6 01:47:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-9122" for this suite. 11/06/22 01:47:14.394
------------------------------
• [11.334 seconds]
[sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23
should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:862
------------------------------
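The "patching the DaemonSet Status" step targets the `/status` subresource rather than the main resource; the watch then observes a MODIFIED event whose conditions contain the custom `StatusPatched` entry shown above. A sketch of the same kind of call with client-go is below; the namespace `daemonsets-9122` is taken from the log, while the JSON merge patch is an assumption about the exact patch type used.

```go
// Sketch: patch a DaemonSet's status subresource with a custom condition.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	ds, err := cs.AppsV1().DaemonSets("daemonsets-9122").Patch(
		context.TODO(), "daemon-set", types.MergePatchType, patch,
		metav1.PatchOptions{}, "status") // "status" selects the subresource
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions now:", ds.Status.Conditions)
}
```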
[sig-apps] Daemon set [Serial]
should retry creating failed daemon pods [Conformance]
test/e2e/apps/daemon_set.go:294
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:47:14.439
Nov 6 01:47:14.439: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/06/22 01:47:14.44
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:47:14.532
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:47:14.586
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146
[It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294
STEP: Creating a simple DaemonSet "daemon-set" 11/06/22 01:47:14.766
STEP: Check that daemon pods launch on every node of the cluster. 11/06/22 01:47:14.799
Nov 6 01:47:14.836: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:14.871: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:14.871: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:15.902: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:15.931: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:15.931: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:16.903: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:16.932: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:16.933: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:17.902: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:17.931: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:17.931: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:18.904: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:18.958: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:18.958: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:19.902: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:19.931: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Nov 6 01:47:19.931: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/06/22 01:47:19.96
Nov 6 01:47:20.070: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:20.114: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:20.114: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:21.145: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:21.174: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:21.174: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:22.145: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:22.176: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:22.176: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:23.146: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:23.174: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:23.175: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:24.148: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:24.178: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:24.178: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:25.146: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:25.175: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 01:47:25.175: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1
Nov 6 01:47:26.146: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 6 01:47:26.175: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Nov 6 01:47:26.175: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted. 11/06/22 01:47:26.175
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111
STEP: Deleting DaemonSet "daemon-set" 11/06/22 01:47:26.231
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4779, will wait for the garbage collector to delete the pods 11/06/22 01:47:26.231
Nov 6 01:47:26.343: INFO: Deleting DaemonSet.extensions daemon-set took: 32.488896ms
Nov 6 01:47:26.444: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.832386ms
Nov 6 01:47:31.372: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 01:47:31.372: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Nov 6 01:47:31.401: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"5787"},"items":null}
Nov 6 01:47:31.429: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5787"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32
Nov 6 01:47:31.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-4779" for this suite. 11/06/22 01:47:31.549
------------------------------
• [17.146 seconds]
[sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294
------------------------------
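Both DaemonSet specs above repeatedly log "DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [... NoSchedule ...], skip checking this node": the test DaemonSet carries no toleration for the control-plane taint, so that node is excluded and only the two worker nodes are expected to run a daemon pod. For comparison, a DaemonSet that should also land on control-plane nodes would add a toleration like the one below to its pod template; this is an illustrative snippet, not part of the e2e manifest.

```go
// Illustrative only: the toleration a DaemonSet pod template would need to be
// scheduled onto nodes tainted node-role.kubernetes.io/control-plane:NoSchedule.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func controlPlaneToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "node-role.kubernetes.io/control-plane",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
}

func main() {
	// Appended to PodSpec.Tolerations in the DaemonSet's pod template.
	fmt.Printf("%+v\n", controlPlaneToleration())
}
```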
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support
works end to end
test/e2e/windows/gmsa_full.go:98
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:47:31.585
Nov 6 01:47:31.585: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-full-test-windows 11/06/22 01:47:31.587
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:47:31.675
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:47:31.729
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31
[It] works end to end test/e2e/windows/gmsa_full.go:98
STEP: finding the worker node that fulfills this test's assumptions 11/06/22 01:47:31.785
Nov 6 01:47:31.813: INFO: Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32
Nov 6 01:47:31.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] tear down framework | framework.go:193
STEP: Destroying namespace "gmsa-full-test-windows-7769" for this suite. 11/06/22 01:47:31.846
------------------------------
S [SKIPPED] [0.292 seconds]
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/windows/framework.go:27
GMSA support test/e2e/windows/gmsa_full.go:97
[It] works end to end test/e2e/windows/gmsa_full.go:98
Expected to find exactly one node with the "agentpool=windowsgmsa" label, found 0
In [It] at: test/e2e/windows/gmsa_full.go:104
Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.glob..func5.1.1()
	test/e2e/windows/gmsa_full.go:104 +0x605
------------------------------
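The skip is environmental rather than a test bug: the GMSA suite looks for exactly one node labelled `agentpool=windowsgmsa`, and this cluster has none. On a cluster that does have a GMSA-ready Windows node, the label could be applied with a patch like the sketch below; the node name is a placeholder taken from this run's workers, and the label alone does not install the GMSA webhook or credential spec, so it only satisfies the node-selection precondition.

```go
// Sketch: label a Windows node so the GMSA suite's node lookup can find it.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "capz-conf-6qqvv" stands in for the GMSA-configured Windows node.
	patch := []byte(`{"metadata":{"labels":{"agentpool":"windowsgmsa"}}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "capz-conf-6qqvv",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```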
[sig-api-machinery] Garbage collector
should not be blocked by dependency circle [Conformance]
test/e2e/apimachinery/garbage_collector.go:849
[BeforeEach] [sig-api-machinery] Garbage collector set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:47:31.883
Nov 6 01:47:31.883: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc 11/06/22 01:47:31.884
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:47:31.979
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:47:32.034
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:31
[It] should not be blocked by dependency circle [Conformance] test/e2e/apimachinery/garbage_collector.go:849
Nov 6 01:47:32.230: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9211f6c5-20a3-41cc-9d39-c2362997221b", Controller:(*bool)(0xc00438387a), BlockOwnerDeletion:(*bool)(0xc00438387b)}}
Nov 6 01:47:32.264: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"11a769c1-2bf4-4b0a-8675-5e86f71166a5", Controller:(*bool)(0xc0048d4e5e), BlockOwnerDeletion:(*bool)(0xc0048d4e5f)}}
Nov 6 01:47:32.301: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dcd0941c-dd0e-414e-bc8f-a8f1acc9c953", Controller:(*bool)(0xc0048d510e), BlockOwnerDeletion:(*bool)(0xc0048d510f)}}
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32
Nov 6 01:47:37.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector tear down framework | framework.go:193
STEP: Destroying namespace "gc-8108" for this suite. 11/06/22 01:47:37.396
------------------------------
• [5.544 seconds]
[sig-api-machinery] Garbage collector test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance] test/e2e/apimachinery/garbage_collector.go:849
------------------------------
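The three INFO lines show the shape of the fixture: pod1, pod2, and pod3 own each other in a cycle (pod1 → pod3, pod2 → pod1, pod3 → pod2), and the spec verifies that the garbage collector still makes progress despite the circular ownerReferences. A rough sketch of wiring up such a cycle with client-go is below; the pause image, the namespace, and the create-then-update approach are assumptions about how one might reproduce the pattern, not the test's exact code.

```go
// Sketch: three pods whose ownerReferences form a cycle, as in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func pod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pause",
			Image: "registry.k8s.io/pause:3.8", // placeholder image
		}}},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("default")
	ctx := context.TODO()

	var created []*corev1.Pod
	for _, n := range []string{"pod1", "pod2", "pod3"} {
		p, err := pods.Create(ctx, pod(n), metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		created = append(created, p)
	}

	// pod1 -> pod3, pod2 -> pod1, pod3 -> pod2: a dependency circle.
	owners := []int{2, 0, 1}
	yes := true
	for i, p := range created {
		o := created[owners[i]]
		p.OwnerReferences = []metav1.OwnerReference{{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               o.Name,
			UID:                o.UID,
			Controller:         &yes,
			BlockOwnerDeletion: &yes,
		}}
		if _, err := pods.Update(ctx, p, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Printf("%s owned by %s\n", p.Name, o.Name)
	}
}
```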
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled
shouldn't scale up
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 01:47:37.431
Nov 6 01:47:37.432: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 01:47:37.433
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:47:37.528
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:47:37.583
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31
[It] shouldn't scale up test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138
STEP: setting up resource consumer and HPA 11/06/22 01:47:37.638
Nov 6 01:47:37.638: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 01:47:37.639
STEP: Creating deployment consumer in namespace horizontal-pod-autoscaling-2767 11/06/22 01:47:37.681
I1106 01:47:37.713569 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-2767, replica count: 1
I1106 01:47:47.766547 14 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 01:47:47.766
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-2767 11/06/22 01:47:47.807
I1106 01:47:47.840925 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-2767, replica count: 1
I1106 01:47:57.893139 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 01:48:02.894: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Nov 6 01:48:02.922: INFO: RC consumer: consume 110 millicores in total
Nov 6 01:48:02.922: INFO: RC consumer: setting consumption to 110 millicores in total
Nov 6 01:48:02.922: INFO: RC consumer: sending request to consume 110 millicores
Nov 6 01:48:02.922: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 }
Nov 6 01:48:02.922: INFO: RC consumer: consume 0 MB in total
Nov 6 01:48:02.923: INFO: RC consumer: disabling mem consumption
Nov 6 01:48:02.923: INFO: RC consumer: consume custom metric 0 in total
Nov 6 01:48:02.923: INFO: RC consumer: disabling consumption of custom metric QPS
STEP: trying to trigger scale up 11/06/22 01:48:02.955
Nov 6 01:48:02.955: INFO: RC consumer: consume 880 millicores in total
Nov 6 01:48:02.977: INFO: RC consumer: setting consumption to 880 millicores in total
Nov 6 01:48:03.007: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:03.035: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Nov 6 01:48:13.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:13.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>}
Nov 6 01:48:23.066: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:23.094: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1c9d0}
Nov 6 01:48:32.978: INFO: RC consumer: sending request to consume 880 millicores
Nov 6 01:48:32.978: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 6 01:48:33.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:33.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004d108a0}
Nov 6 01:48:43.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:43.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004d10980}
Nov 6 01:48:53.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:48:53.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1d080}
Nov 6 01:49:03.035: INFO: RC consumer: sending request to consume 880 millicores
Nov 6 01:49:03.035: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 6 01:49:03.070: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:03.097: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1d1e0}
Nov 6 01:49:13.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:13.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94750}
Nov 6 01:49:23.066: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:23.094: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94830}
Nov 6 01:49:33.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:33.088: INFO: RC consumer: sending request to consume 880 millicores
Nov 6 01:49:33.088: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 6 01:49:33.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004d11070}
Nov 6 01:49:43.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:43.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94c90}
Nov 6 01:49:53.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:49:53.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004382250}
Nov 6 01:50:03.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:03.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc0043824e0}
Nov 6 01:50:03.140: INFO: RC consumer: sending request to consume 880 millicores
Nov 6 01:50:03.140: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 6 01:50:13.067: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:13.095: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1c1b0}
Nov 6 01:50:23.066: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:23.096: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94510}
Nov 6 01:50:33.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:33.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94820}
Nov 6 01:50:33.194: INFO: RC consumer: sending request to consume 880 millicores
Nov 6 01:50:33.194: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 }
Nov 6 01:50:43.066: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:43.094: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1c4e0}
Nov 6 01:50:53.065: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:50:53.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1c800}
Nov 6 01:51:03.064: INFO: expecting there to be in [1, 1] replicas (are: 1)
Nov 6 01:51:03.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1
CurrentCPUUtilizationPercentage:0xc003f1caa0} Nov 6 01:51:03.241: INFO: RC consumer: sending request to consume 880 millicores Nov 6 01:51:03.241: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2767/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=880&requestSizeMillicores=100 } Nov 6 01:51:13.064: INFO: expecting there to be in [1, 1] replicas (are: 1) Nov 6 01:51:13.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94b60} Nov 6 01:51:23.065: INFO: expecting there to be in [1, 1] replicas (are: 1) Nov 6 01:51:23.093: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004382be0} Nov 6 01:51:33.064: INFO: expecting there to be in [1, 1] replicas (are: 1) Nov 6 01:51:33.092: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc004b94e90} Nov 6 01:51:33.121: INFO: expecting there to be in [1, 1] replicas (are: 1) Nov 6 01:51:33.149: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:1 DesiredReplicas:1 CurrentCPUUtilizationPercentage:0xc003f1cf20} Nov 6 01:51:33.149: INFO: Number of replicas was stable over 3m30s �[1mSTEP:�[0m verifying time waited for a scale up �[38;5;243m11/06/22 01:51:33.149�[0m Nov 6 01:51:33.150: INFO: time waited for scale up: 3m30.171895931s �[1mSTEP:�[0m verifying number of replicas �[38;5;243m11/06/22 01:51:33.15�[0m �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/06/22 01:51:33.21�[0m Nov 6 01:51:33.211: INFO: RC consumer: stopping metric consumer Nov 6 01:51:33.211: INFO: RC consumer: stopping mem consumer Nov 6 01:51:33.211: INFO: RC consumer: stopping CPU consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-2767, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 01:51:43.211�[0m Nov 6 01:51:43.322: INFO: Deleting Deployment.apps consumer took: 32.050885ms Nov 6 01:51:43.423: INFO: Terminating Deployment.apps consumer pods took: 101.077367ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-2767, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 01:51:45.28�[0m Nov 6 01:51:45.391: INFO: Deleting ReplicationController consumer-ctrl took: 32.187497ms Nov 6 01:51:45.492: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.211475ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 6 01:51:47.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2767" for this suite. 
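The spec above drives the consumer to 880 millicores and then checks every ten seconds, for 3m30s, that the target stays at exactly one replica: scale-up is expected to be a no-op because the HPA's scale-up behavior is disabled (presumably via behavior.scaleUp.selectPolicy: Disabled). As a rough illustration only, not the e2e suite's own helper, an HPA of that shape can be built with the autoscaling/v2 types; the target reference, utilization threshold, and replica bounds below are placeholders.

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hpaWithScaleUpDisabled builds an HPA whose scale-up is switched off via
// behavior.scaleUp.selectPolicy=Disabled. Names and numbers are illustrative
// placeholders, not the values used by the conformance suite.
func hpaWithScaleUpDisabled() *autoscalingv2.HorizontalPodAutoscaler {
	disabled := autoscalingv2.DisabledPolicySelect
	minReplicas := int32(1)
	targetCPU := int32(20)
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "consumer",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
			// With SelectPolicy set to Disabled on the scaleUp rules, the
			// controller leaves the replica count alone even when CPU sits far
			// above target, which is what the stable "[1, 1]" replica checks
			// in the log are verifying.
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleUp: &autoscalingv2.HPAScalingRules{SelectPolicy: &disabled},
			},
		},
	}
}

func main() {
	fmt.Println(hpaWithScaleUpDisabled().Name)
}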
�[38;5;243m11/06/22 01:51:47.375�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [249.993 seconds]�[0m [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m with autoscaling disabled �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137�[0m shouldn't scale up �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:138�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m
�[38;5;243m11/06/22 01:51:47.375�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Namespaces [Serial]�[0m �[1mshould apply a finalizer to a Namespace [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/namespace.go:394�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:51:47.432�[0m Nov 6 01:51:47.432: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename namespaces �[38;5;243m11/06/22 01:51:47.433�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:51:47.526�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:51:47.58�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394 �[1mSTEP:�[0m Creating namespace "e2e-ns-m8zs5" �[38;5;243m11/06/22 01:51:47.635�[0m Nov 6 01:51:47.722: INFO: Namespace "e2e-ns-m8zs5-4521" has []v1.FinalizerName{"kubernetes"} �[1mSTEP:�[0m Adding e2e finalizer to namespace "e2e-ns-m8zs5-4521" �[38;5;243m11/06/22 01:51:47.722�[0m Nov 6 01:51:47.781: INFO: Namespace "e2e-ns-m8zs5-4521" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} �[1mSTEP:�[0m Removing e2e finalizer from namespace "e2e-ns-m8zs5-4521" �[38;5;243m11/06/22 01:51:47.781�[0m Nov 6 01:51:47.841: INFO: Namespace "e2e-ns-m8zs5-4521" has []v1.FinalizerName{"kubernetes"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:51:47.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces 
[Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "namespaces-128" for this suite. �[38;5;243m11/06/22 01:51:47.873�[0m �[1mSTEP:�[0m Destroying namespace "e2e-ns-m8zs5-4521" for this suite. �[38;5;243m11/06/22 01:51:47.904�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [0.508 seconds]�[0m [sig-api-machinery] Namespaces [Serial] �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should apply a finalizer to a Namespace [Conformance] �[38;5;243mtest/e2e/apimachinery/namespace.go:394�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m
�[38;5;243m11/06/22 01:51:47.904�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-api-machinery] Namespaces [Serial]�[0m �[1mshould ensure that all services are removed when a namespace is deleted [Conformance]�[0m �[38;5;243mtest/e2e/apimachinery/namespace.go:251�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:51:47.95�[0m Nov 6 01:51:47.950: 
INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename namespaces �[38;5;243m11/06/22 01:51:47.951�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:51:48.043�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:51:48.097�[0m [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:251 �[1mSTEP:�[0m Creating a test namespace �[38;5;243m11/06/22 01:51:48.153�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:51:48.246�[0m �[1mSTEP:�[0m Creating a service in the namespace �[38;5;243m11/06/22 01:51:48.302�[0m �[1mSTEP:�[0m Deleting the namespace �[38;5;243m11/06/22 01:51:48.354�[0m �[1mSTEP:�[0m Waiting for the namespace to be removed. �[38;5;243m11/06/22 01:51:48.388�[0m �[1mSTEP:�[0m Recreating the namespace �[38;5;243m11/06/22 01:51:54.421�[0m �[1mSTEP:�[0m Verifying there is no service in the namespace �[38;5;243m11/06/22 01:51:54.515�[0m [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:51:54.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "namespaces-391" for this suite. �[38;5;243m11/06/22 01:51:54.576�[0m �[1mSTEP:�[0m Destroying namespace "nsdeletetest-6080" for this suite. �[38;5;243m11/06/22 01:51:54.609�[0m Nov 6 01:51:54.637: INFO: Namespace nsdeletetest-6080 was already deleted �[1mSTEP:�[0m Destroying namespace "nsdeletetest-8812" for this suite. 
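The sequence this spec exercises, creating a test namespace and a Service in it, deleting the namespace, waiting for it to disappear, recreating it, and confirming the Service is gone, maps onto a handful of client-go calls. The following is a minimal sketch under that reading, not the e2e framework's own helpers; the kubeconfig path, names, port, and polling bounds are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	const ns = "nsdeletetest-example" // placeholder namespace name

	// Create the test namespace and a Service inside it.
	_, err = cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Delete the namespace and poll until the API server reports NotFound.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ {
		if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Recreate the namespace and verify the Service did not survive deletion.
	_, err = cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d\n", len(svcs.Items))
}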
�[38;5;243m11/06/22 01:51:54.637�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [6.722 seconds]�[0m [sig-api-machinery] Namespaces [Serial] �[38;5;243mtest/e2e/apimachinery/framework.go:23�[0m should ensure that all services are removed when a namespace is deleted [Conformance] �[38;5;243mtest/e2e/apimachinery/namespace.go:251�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m
�[38;5;243m11/06/22 01:51:54.637�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-apps] Daemon set [Serial]�[0m �[1mshould list and delete a collection of DaemonSets [Conformance]�[0m �[38;5;243mtest/e2e/apps/daemon_set.go:823�[0m [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:51:54.679�[0m Nov 6 01:51:54.679: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename daemonsets �[38;5;243m11/06/22 01:51:54.68�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:51:54.772�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:51:54.826�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823 �[1mSTEP:�[0m Creating simple DaemonSet "daemon-set" �[38;5;243m11/06/22 01:51:55.011�[0m �[1mSTEP:�[0m Check that daemon pods launch on every node of the cluster. 
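The listing and collection deletion that this spec performs a few steps further on comes down to a List call followed by a DeleteCollection call. A minimal client-go sketch is shown below; it assumes the DaemonSet carries the daemonset-name=daemon-set label seen on its pods in the dump that follows, and the kubeconfig path and namespace are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	const ns = "daemonsets-example" // placeholder namespace name

	// List every DaemonSet in the namespace, as the spec's listing step does.
	dsList, err := cs.AppsV1().DaemonSets(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("daemonsets found: %d\n", len(dsList.Items))

	// Delete them as a collection in a single call, selecting by label
	// rather than deleting each object by name.
	err = cs.AppsV1().DaemonSets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil {
		panic(err)
	}
}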
�[38;5;243m11/06/22 01:51:55.045�[0m Nov 6 01:51:55.078: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:55.112: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:55.112: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:56.145: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:56.174: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:56.174: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:57.143: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:57.172: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:57.172: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:58.144: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:58.198: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:58.198: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:59.143: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:59.177: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:59.177: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:52:00.144: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:52:00.173: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 6 01:52:00.173: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m listing all DeamonSets �[38;5;243m11/06/22 01:52:00.201�[0m �[1mSTEP:�[0m DeleteCollection of the DaemonSets �[38;5;243m11/06/22 01:52:00.23�[0m �[1mSTEP:�[0m Verify that ReplicaSets have been deleted �[38;5;243m11/06/22 01:52:00.264�[0m [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 Nov 6 01:52:00.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6456"},"items":null} Nov 6 01:52:00.380: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6456"},"items":[{"metadata":{"name":"daemon-set-lrjjx","generateName":"daemon-set-","namespace":"daemonsets-4006","uid":"d347319e-009d-4c31-bb21-27d64b61a76c","resourceVersion":"6455","creationTimestamp":"2022-11-06T01:51:55Z","deletionTimestamp":"2022-11-06T01:52:30Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"65fbd496f8","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"2516af64d957bcc11fe3beb2cb5a7a8d5151a70c38bc3330ea36b16f449a7f1d","cni.projectcalico.org/podIP":"192.168.41.84/32","cni.projectcalico.org/podIPs":"192.168.41.84/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"93a96c66-98e4-4d7e-9d4d-f89b7fd42542","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a96c66-98e4-4d7e-9d4d-f89b7fd42542\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.41.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-rl6g5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-rl6g5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File",
"imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-ppc2q","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-ppc2q"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"}],"hostIP":"10.1.0.4","podIP":"192.168.41.84","podIPs":[{"ip":"192.168.41.84"}],"startTime":"2022-11-06T01:51:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-06T01:51:58Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://0f7430a0094f4fb04243d0fef71ffe5a993ef503194709322612bf97e7a0b793","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-pvmvh","generateName":"daemon-set-","namespace":"daemonsets-4006","uid":"9a815668-4c47-4ece-9660-9a67cb268e7c","resourceVersion":"6456","creationTimestamp":"2022-11-06T01:51:55Z","deletionTimestamp":"2022-11-06T01:52:30Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"65fbd496f8","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"0adab00131eee2778bb10b76e10e5b95eaeaaeb11db9b093f71bcfb18c1345c7","cni.projectcalico.org/podIP":"192.168.43.213/32","cni.projectcalico.org/podIPs":"192.168.43.213/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"93a96c66-98e4-4d7e-9d4d-f89b7fd42542","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a96c66-98e4
-4d7e-9d4d-f89b7fd42542\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.43.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bcs2x","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bcs2x","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-6qqvv","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-6qqvv"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"
}],"hostIP":"10.1.0.5","podIP":"192.168.43.213","podIPs":[{"ip":"192.168.43.213"}],"startTime":"2022-11-06T01:51:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-06T01:51:59Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://4a1443d3d0749cd2be26ff6e56a477058f9e3f43fcc832c89d95c03106faa2ec","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:52:00.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "daemonsets-4006" for this suite. �[38;5;243m11/06/22 01:52:00.499�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [5.858 seconds]�[0m [sig-apps] Daemon set [Serial] �[38;5;243mtest/e2e/apps/framework.go:23�[0m should list and delete a collection of DaemonSets [Conformance] �[38;5;243mtest/e2e/apps/daemon_set.go:823�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:51:54.679�[0m Nov 6 01:51:54.679: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename daemonsets �[38;5;243m11/06/22 01:51:54.68�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:51:54.772�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:51:54.826�[0m [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:823 �[1mSTEP:�[0m Creating simple DaemonSet "daemon-set" �[38;5;243m11/06/22 01:51:55.011�[0m �[1mSTEP:�[0m Check that daemon pods launch on every node of the cluster. 
�[38;5;243m11/06/22 01:51:55.045�[0m Nov 6 01:51:55.078: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:55.112: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:55.112: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:56.145: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:56.174: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:56.174: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:57.143: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:57.172: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:57.172: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:58.144: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:58.198: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:58.198: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:51:59.143: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:51:59.177: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 01:51:59.177: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 01:52:00.144: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 01:52:00.173: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 6 01:52:00.173: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m listing all DeamonSets �[38;5;243m11/06/22 01:52:00.201�[0m �[1mSTEP:�[0m DeleteCollection of the DaemonSets �[38;5;243m11/06/22 01:52:00.23�[0m �[1mSTEP:�[0m Verify that ReplicaSets have been deleted �[38;5;243m11/06/22 01:52:00.264�[0m [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 Nov 6 01:52:00.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6456"},"items":null} Nov 6 01:52:00.380: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6456"},"items":[{"metadata":{"name":"daemon-set-lrjjx","generateName":"daemon-set-","namespace":"daemonsets-4006","uid":"d347319e-009d-4c31-bb21-27d64b61a76c","resourceVersion":"6455","creationTimestamp":"2022-11-06T01:51:55Z","deletionTimestamp":"2022-11-06T01:52:30Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"65fbd496f8","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"2516af64d957bcc11fe3beb2cb5a7a8d5151a70c38bc3330ea36b16f449a7f1d","cni.projectcalico.org/podIP":"192.168.41.84/32","cni.projectcalico.org/podIPs":"192.168.41.84/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"93a96c66-98e4-4d7e-9d4d-f89b7fd42542","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a96c66-98e4-4d7e-9d4d-f89b7fd42542\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.41.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-rl6g5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-rl6g5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File",
"imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-ppc2q","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-ppc2q"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"}],"hostIP":"10.1.0.4","podIP":"192.168.41.84","podIPs":[{"ip":"192.168.41.84"}],"startTime":"2022-11-06T01:51:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-06T01:51:58Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://0f7430a0094f4fb04243d0fef71ffe5a993ef503194709322612bf97e7a0b793","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-pvmvh","generateName":"daemon-set-","namespace":"daemonsets-4006","uid":"9a815668-4c47-4ece-9660-9a67cb268e7c","resourceVersion":"6456","creationTimestamp":"2022-11-06T01:51:55Z","deletionTimestamp":"2022-11-06T01:52:30Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"65fbd496f8","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"0adab00131eee2778bb10b76e10e5b95eaeaaeb11db9b093f71bcfb18c1345c7","cni.projectcalico.org/podIP":"192.168.43.213/32","cni.projectcalico.org/podIPs":"192.168.43.213/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"93a96c66-98e4-4d7e-9d4d-f89b7fd42542","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a96c66-98e4
-4d7e-9d4d-f89b7fd42542\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2022-11-06T01:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.43.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bcs2x","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bcs2x","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-6qqvv","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-6qqvv"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:59Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-11-06T01:51:55Z"
}],"hostIP":"10.1.0.5","podIP":"192.168.43.213","podIPs":[{"ip":"192.168.43.213"}],"startTime":"2022-11-06T01:51:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-11-06T01:51:59Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://4a1443d3d0749cd2be26ff6e56a477058f9e3f43fcc832c89d95c03106faa2ec","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 01:52:00.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "daemonsets-4006" for this suite. �[38;5;243m11/06/22 01:52:00.499�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;243m------------------------------�[0m �[38;5;14mS�[0m�[38;5;14mS�[0m�[38;5;14mS�[0m �[38;5;243m------------------------------�[0m �[0m[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) �[38;5;243m[Serial] [Slow] Deployment (Pod Resource)�[0m �[1mShould scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation�[0m �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:154�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 01:52:00.538�[0m Nov 6 01:52:00.538: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/06/22 01:52:00.539�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 01:52:00.628�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 01:52:00.683�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 Nov 6 01:52:00.738: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/06/22 01:52:00.739�[0m �[1mSTEP:�[0m Creating deployment test-deployment in namespace horizontal-pod-autoscaling-4601 �[38;5;243m11/06/22 01:52:00.779�[0m I1106 01:52:00.812215 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-4601, replica count: 1 I1106 01:52:10.865441 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 01:52:10.865�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-4601 �[38;5;243m11/06/22 01:52:10.91�[0m I1106 01:52:10.944302 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: 
horizontal-pod-autoscaling-4601, replica count: 1 I1106 01:52:20.998627 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 01:52:26.002: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 6 01:52:26.031: INFO: RC test-deployment: consume 0 millicores in total Nov 6 01:52:26.031: INFO: RC test-deployment: disabling CPU consumption Nov 6 01:52:26.031: INFO: RC test-deployment: consume 250 MB in total Nov 6 01:52:26.031: INFO: RC test-deployment: setting consumption to 250 MB in total Nov 6 01:52:26.031: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:52:26.031: INFO: RC test-deployment: consume custom metric 0 in total Nov 6 01:52:26.031: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:52:26.031: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 6 01:52:26.094: INFO: waiting for 3 replicas (current: 1) Nov 6 01:52:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:52:56.120: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:52:56.120: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:53:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:26.156: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:53:26.156: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:53:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:56.204: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:53:56.205: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:54:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:26.243: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:54:26.244: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:54:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:56.280: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:54:56.280: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:55:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:26.317: INFO: 
RC test-deployment: sending request to consume 250 MB Nov 6 01:55:26.317: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:55:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:56.354: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:55:56.355: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:56:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:26.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:26.402: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:56:26.402: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:56:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:56.438: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:56:56.438: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:57:06.126: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:26.475: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:57:26.475: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:57:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:56.512: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:57:56.512: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:58:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:26.124: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:26.550: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:58:26.550: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:58:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:56.586: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:58:56.586: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:59:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:26.623: 
INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:59:26.624: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:59:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:56.659: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:59:56.659: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:00:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:26.696: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:00:26.696: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:00:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:56.733: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:00:56.734: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:01:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:26.771: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:01:26.771: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:01:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:56.807: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:01:56.808: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:02:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:26.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:26.849: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:02:26.849: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:02:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:56.885: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:02:56.885: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:03:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:03:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 
02:03:26.922: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:03:26.922: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:03:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:03:56.958: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:03:56.960: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:04:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:26.996: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:04:26.996: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:04:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:57.032: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:04:57.032: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:05:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:27.067: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:05:27.067: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:05:46.125: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:57.104: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:05:57.105: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:06:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:27.142: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:06:27.142: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:06:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:57.179: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:06:57.179: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:07:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:07:26.122: INFO: waiting for 3 replicas (current: 2) 
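Between the ConsumeMem requests above, the suite keeps polling the consumer Deployment for the expected replica count, and in this run it never sees a third ready replica, which is what surfaces as the 15m timeout immediately below. As a rough illustration only (this is not the framework helper at horizontal_pod_autoscaling.go:209), a minimal client-go sketch of that polling loop; the kubeconfig path, namespace, Deployment name, replica target, ~20s poll interval and 15m timeout are the values visible in this log, everything else is assumed:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyReplicas polls the Deployment until it reports the desired
// number of ready replicas or the timeout expires, the same shape as the
// "waiting for 3 replicas (current: 2)" loop captured above.
func waitForReadyReplicas(cs kubernetes.Interface, namespace, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas == want, nil
	})
}

func main() {
	// /tmp/kubeconfig is the workload-cluster kubeconfig this job passes to e2e.test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace, Deployment name, replica target and 15m window all come from the log above.
	if err := waitForReadyReplicas(cs, "horizontal-pod-autoscaling-4601", "test-deployment", 3, 15*time.Minute); err != nil {
		fmt.Println("scale-up did not complete:", err)
	}
}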
Nov 6 02:07:26.151: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:07:26.151: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas:
    <*errors.errorString | 0xc000205cd0>: {
        s: "timed out waiting for the condition",
    }
Nov 6 02:07:26.151: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00077de68, {0x74a0e0e?, 0xc002f31f80?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.1()
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:155 +0x88
STEP: Removing consuming RC test-deployment 11/06/22 02:07:26.185
Nov 6 02:07:26.185: INFO: RC test-deployment: stopping metric consumer
Nov 6 02:07:26.185: INFO: RC test-deployment: stopping CPU consumer
Nov 6 02:07:26.185: INFO: RC test-deployment: stopping mem consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4601, will wait for the garbage collector to delete the pods 11/06/22 02:07:36.186
Nov 6 02:07:36.296: INFO: Deleting Deployment.apps test-deployment took: 31.675607ms
Nov 6 02:07:36.398: INFO: Terminating Deployment.apps test-deployment pods took: 101.010916ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4601, will wait for the garbage collector to delete the pods 11/06/22 02:07:38.653
Nov 6 02:07:38.766: INFO: Deleting ReplicationController test-deployment-ctrl took: 33.445478ms
Nov 6 02:07:38.867: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 101.315935ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/node/init/init.go:32
Nov 6 02:07:40.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/06/22 02:07:40.756
STEP: Collecting events from namespace "horizontal-pod-autoscaling-4601". 11/06/22 02:07:40.756
STEP: Found 21 events.
�[38;5;243m11/06/22 02:07:40.785�[0m Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-g9jlw Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-669bb6996d-g9jlw to capz-conf-ppc2q Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:03 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:03 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:04 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:10 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-8bm29 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:10 +0000 UTC - event for test-deployment-ctrl-8bm29: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-ctrl-8bm29 to capz-conf-6qqvv Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:13 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Created: Created container test-deployment-ctrl Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:13 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:14 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Started: Started container test-deployment-ctrl Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: memory resource utilization (percentage of request) above target Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-zhc7m Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-669bb6996d-zhc7m to capz-conf-6qqvv Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:43 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:43 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:45 +0000 UTC - event for 
test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Started: Started container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:36 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:36 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:38 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment-ctrl Nov 6 02:07:40.814: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 02:07:40.814: INFO: Nov 6 02:07:40.844: INFO: Logging node info for node capz-conf-6qqvv Nov 6 02:07:40.872: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 7761 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 01:40:01 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} 
{kubelet.exe Update v1 2022-11-06 02:05:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:40.873: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 02:07:40.901: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 02:07:40.946: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 02:07:40.946: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:40.946: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:07:40.946: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:07:40.946: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:07:40.946: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:07:40.946: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.120: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 02:07:41.120: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.149: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 7757 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 02:05:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:41.149: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.178: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.227: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container etcd ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:07:41.227: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 02:07:41.227: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:41.227: INFO: Container calico-node ready: true, restart count 0 Nov 6 02:07:41.227: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 02:07:41.227: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container metrics-server ready: true, restart count 0 Nov 6 02:07:41.227: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container coredns ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.227: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container coredns ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 02:07:41.400: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.401: INFO: Logging node info for node capz-conf-ppc2q Nov 6 02:07:41.430: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 7769 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 01:40:02 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:05:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:41.431: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 02:07:41.459: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 02:07:41.506: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:07:41.506: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:07:41.506: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.506: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 02:07:41.506: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:41.506: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:07:41.506: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:07:41.654: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-4601" for this suite. 11/06/22 02:07:41.654
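The captured Ginkgo output that follows replays the failed spec from the beginning; its repeated "waiting for 3 replicas (current: 2)" lines appear to come from a polling loop in the e2e HPA helper. Below is a minimal client-go sketch of that pattern for anyone reproducing the check by hand; the function name, the 20-second poll interval, and the hard-coded kubeconfig path are illustrative assumptions, not the test's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyReplicas polls the Deployment until it reports the desired
// number of ready replicas or the timeout expires, mirroring the
// "waiting for N replicas (current: M)" lines in the captured output below.
func waitForReadyReplicas(c kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas == want, nil
	})
}

func main() {
	// /tmp/kubeconfig is the path this e2e run used; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	// The failing spec waited 15m for test-deployment in
	// horizontal-pod-autoscaling-4601 to reach 3 ready replicas.
	if err := waitForReadyReplicas(c, "horizontal-pod-autoscaling-4601", "test-deployment", 3, 15*time.Minute); err != nil {
		fmt.Println("scale-up did not complete:", err)
	}
}
```

In this run the loop stalled at 2 of 3 ready replicas until the 15-minute budget expired, which is the timeout reported in the failure summary further down.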
------------------------------ • [FAILED] [941.152 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 01:52:00.538 Nov 6 01:52:00.538: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 01:52:00.539 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 01:52:00.628 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 01:52:00.683 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 Nov 6 01:52:00.738: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 01:52:00.739 STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-4601 11/06/22 01:52:00.779 I1106 01:52:00.812215 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-4601, replica count: 1 I1106 01:52:10.865441 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/06/22 01:52:10.865 STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-4601 11/06/22 01:52:10.91 I1106 01:52:10.944302 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-4601, replica count: 1 I1106 01:52:20.998627 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 01:52:26.002: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 6 01:52:26.031: INFO: RC test-deployment: consume 0 millicores in total Nov 6 01:52:26.031: INFO: RC test-deployment: disabling CPU consumption Nov 6 01:52:26.031: INFO: RC test-deployment: consume 250 MB in total Nov 6 01:52:26.031: INFO: RC test-deployment: setting consumption to 250 MB in total Nov 6 01:52:26.031: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:52:26.031: INFO: RC test-deployment: consume custom metric 0 in total Nov 6 01:52:26.031: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false 
durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:52:26.031: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 6 01:52:26.094: INFO: waiting for 3 replicas (current: 1) Nov 6 01:52:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:52:56.120: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:52:56.120: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:53:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:26.156: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:53:26.156: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:53:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:53:56.204: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:53:56.205: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:54:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:26.243: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:54:26.244: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:54:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:54:56.280: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:54:56.280: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:55:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:26.317: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:55:26.317: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:55:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:55:56.354: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:55:56.355: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:56:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:26.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:26.402: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:56:26.402: INFO: ConsumeMem URL: {https 
capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:56:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:56:56.438: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:56:56.438: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:57:06.126: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:26.475: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:57:26.475: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:57:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:57:56.512: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:57:56.512: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:58:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:26.124: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:26.550: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:58:26.550: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:58:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:58:56.586: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:58:56.586: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:59:06.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:26.623: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:59:26.624: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 01:59:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 01:59:56.659: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 01:59:56.659: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:00:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:26.696: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:00:26.696: INFO: ConsumeMem URL: 
{https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:00:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:00:56.733: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:00:56.734: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:01:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:26.771: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:01:26.771: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:01:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:01:56.807: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:01:56.808: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:02:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:26.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:26.849: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:02:26.849: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:02:46.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:02:56.885: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:02:56.885: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:03:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:03:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:03:26.922: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:03:26.922: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:03:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:03:56.958: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:03:56.960: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:04:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:26.996: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:04:26.996: INFO: ConsumeMem 
URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:04:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:04:57.032: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:04:57.032: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:05:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:27.067: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:05:27.067: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:05:46.125: INFO: waiting for 3 replicas (current: 2) Nov 6 02:05:57.104: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:05:57.105: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:06:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:26.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:27.142: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:06:27.142: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:06:46.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:06:57.179: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:06:57.179: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4601/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:07:06.123: INFO: waiting for 3 replicas (current: 2) Nov 6 02:07:26.122: INFO: waiting for 3 replicas (current: 2) Nov 6 02:07:26.151: INFO: waiting for 3 replicas (current: 2) Nov 6 02:07:26.151: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 02:07:26.151: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00077de68, {0x74a0e0e?, 0xc002f31f80?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:155 +0x88 STEP: Removing consuming RC test-deployment 11/06/22 02:07:26.185 Nov 6 02:07:26.185: INFO: RC test-deployment: stopping metric consumer Nov 6 02:07:26.185: INFO: RC test-deployment: stopping CPU consumer Nov 6 02:07:26.185: INFO: RC test-deployment: stopping mem consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-4601, will wait for the garbage collector to delete the pods 11/06/22 02:07:36.186 Nov 6 02:07:36.296: INFO: Deleting Deployment.apps test-deployment took: 31.675607ms Nov 6 02:07:36.398: INFO: Terminating Deployment.apps test-deployment pods took: 101.010916ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-4601, will wait for the garbage collector to delete the pods 11/06/22 02:07:38.653 Nov 6 02:07:38.766: INFO: Deleting ReplicationController test-deployment-ctrl took: 33.445478ms Nov 6 02:07:38.867: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 101.315935ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/node/init/init.go:32 Nov 6 02:07:40.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/06/22 02:07:40.756 STEP: Collecting events from namespace "horizontal-pod-autoscaling-4601". 11/06/22 02:07:40.756 STEP: Found 21 events. 
11/06/22 02:07:40.785 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-g9jlw Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:00 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-669bb6996d-g9jlw to capz-conf-ppc2q Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:03 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:03 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:04 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:10 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-8bm29 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:10 +0000 UTC - event for test-deployment-ctrl-8bm29: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-ctrl-8bm29 to capz-conf-6qqvv Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:13 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Created: Created container test-deployment-ctrl Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:13 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:14 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Started: Started container test-deployment-ctrl Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: memory resource utilization (percentage of request) above target Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-zhc7m Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:41 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-4601/test-deployment-669bb6996d-zhc7m to capz-conf-6qqvv Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:43 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:43 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:07:40.786: INFO: At 2022-11-06 01:52:45 +0000 UTC - event for 
test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Started: Started container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:36 +0000 UTC - event for test-deployment-669bb6996d-g9jlw: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:36 +0000 UTC - event for test-deployment-669bb6996d-zhc7m: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 02:07:40.786: INFO: At 2022-11-06 02:07:38 +0000 UTC - event for test-deployment-ctrl-8bm29: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment-ctrl Nov 6 02:07:40.814: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 02:07:40.814: INFO: Nov 6 02:07:40.844: INFO: Logging node info for node capz-conf-6qqvv Nov 6 02:07:40.872: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 7761 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 01:40:01 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} 
{kubelet.exe Update v1 2022-11-06 02:05:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:38 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:40.873: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 02:07:40.901: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 02:07:40.946: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 02:07:40.946: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:40.946: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:07:40.946: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:07:40.946: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:07:40.946: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:07:40.946: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:40.946: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.120: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 02:07:41.120: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.149: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 7757 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 02:05:35 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:35 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:41.149: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.178: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.227: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container etcd ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:07:41.227: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 02:07:41.227: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:41.227: INFO: Container calico-node ready: true, restart count 0 Nov 6 02:07:41.227: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 02:07:41.227: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container metrics-server ready: true, restart count 0 Nov 6 02:07:41.227: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container coredns ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.227: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container coredns ready: true, restart count 0 Nov 6 02:07:41.227: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.227: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 02:07:41.400: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:07:41.401: INFO: Logging node info for node capz-conf-ppc2q Nov 6 02:07:41.430: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 7769 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 01:40:02 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:05:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:05:42 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:07:41.431: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 02:07:41.459: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 02:07:41.506: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:07:41.506: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:07:41.506: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:07:41.506: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:07:41.506: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 02:07:41.506: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:07:41.506: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:07:41.506: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:07:41.654: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-4601" for this suite. �[38;5;243m11/06/22 02:07:41.654�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;9mNov 6 02:07:26.152: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition�[0m �[38;5;9mIn �[1m[It]�[0m�[38;5;9m at: �[1mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:209�[0m �[38;5;9mFull Stack Trace�[0m k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00077de68, {0x74a0e0e?, 0xc002f31f80?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:155 +0x88
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 02:07:41.694 Nov 6 02:07:41.694: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename daemonsets 11/06/22 02:07:41.696 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:07:41.788 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:07:41.842 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374 Nov 6 02:07:42.022: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster.
�[38;5;243m11/06/22 02:07:42.059�[0m Nov 6 02:07:42.101: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:42.131: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:07:42.131: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:43.162: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:43.192: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:07:43.192: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:44.162: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:44.191: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:07:44.191: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:45.162: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:45.192: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:07:45.192: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:46.163: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:46.192: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:07:46.192: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:47.163: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:47.192: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:07:47.192: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:07:48.162: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:48.190: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 6 02:07:48.190: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set �[1mSTEP:�[0m Update daemon pods image. �[38;5;243m11/06/22 02:07:48.308�[0m �[1mSTEP:�[0m Check that daemon pods images are updated. �[38;5;243m11/06/22 02:07:48.389�[0m Nov 6 02:07:48.418: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:48.450: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:49.480: INFO: Wrong image for pod: daemon-set-lfldg. 
Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:49.510: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:50.482: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:50.513: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:51.480: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:51.512: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:52.480: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:52.512: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:53.482: INFO: Pod daemon-set-4lb6c is not available Nov 6 02:07:53.482: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:53.512: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:54.484: INFO: Pod daemon-set-4lb6c is not available Nov 6 02:07:54.484: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:54.516: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:55.479: INFO: Pod daemon-set-4lb6c is not available Nov 6 02:07:55.479: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. Nov 6 02:07:55.510: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:56.480: INFO: Pod daemon-set-4lb6c is not available Nov 6 02:07:56.480: INFO: Wrong image for pod: daemon-set-lfldg. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. 
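The loop above is the suite checking a rolling image update: the DaemonSet's pod template was switched from httpd:2.4.38-2 to agnhost:2.40, and the framework polls until every node runs a daemon pod with the new image. A minimal client-go sketch of the same operation outside the e2e framework (not the actual test/e2e code; the kubeconfig path, namespace, and image below are illustrative assumptions) could look like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the suite above points at /tmp/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical namespace and object name; the e2e framework generates
	// a namespace per spec (daemonsets-2384 in this run).
	ns, name := "default", "daemon-set"
	newImage := "registry.k8s.io/e2e-test-images/agnhost:2.40"

	// Switch the pod template image; with the RollingUpdate strategy the
	// controller replaces daemon pods node by node.
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = newImage
	if _, err := cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Wait until every scheduled daemon pod runs the new template and is
	// available, the same convergence the "Number of nodes with available
	// pods" lines in the log track.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("updated %d, available %d, desired %d\n",
			cur.Status.UpdatedNumberScheduled, cur.Status.NumberAvailable, cur.Status.DesiredNumberScheduled)
		d := cur.Status.DesiredNumberScheduled
		return d > 0 && cur.Status.UpdatedNumberScheduled == d && cur.Status.NumberAvailable == d, nil
	})
	if err != nil {
		panic(err)
	}
}
```

Production code would usually wrap the Update call in a retry-on-conflict helper; it is left out here to keep the sketch short.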
Nov 6 02:07:56.512: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:57.511: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:58.514: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:07:59.510: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:00.511: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:01.480: INFO: Pod daemon-set-2wwn2 is not available Nov 6 02:08:01.511: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node �[1mSTEP:�[0m Check that daemon pods are still running on every node of the cluster. �[38;5;243m11/06/22 02:08:01.511�[0m Nov 6 02:08:01.542: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:01.571: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:08:01.571: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:08:02.602: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:02.631: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:08:02.631: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:08:03.603: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:03.636: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:08:03.636: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:08:04.602: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:04.631: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:08:04.631: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:08:05.603: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:05.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 6 02:08:05.632: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 
02:08:06.602: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:08:06.630: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 6 02:08:06.631: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 STEP: Deleting DaemonSet "daemon-set" 11/06/22 02:08:06.774 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2384, will wait for the garbage collector to delete the pods 11/06/22 02:08:06.774 Nov 6 02:08:06.884: INFO: Deleting DaemonSet.extensions daemon-set took: 31.861636ms Nov 6 02:08:06.985: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.455923ms Nov 6 02:08:09.913: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 6 02:08:09.914: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Nov 6 02:08:09.942: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"8168"},"items":null} Nov 6 02:08:09.971: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8168"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 02:08:10.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-2384" for this suite. 11/06/22 02:08:10.09
------------------------------
• [28.430 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:374
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 02:08:10.125 Nov 6 02:08:10.125: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename sched-preemption 11/06/22 02:08:10.127 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:08:10.22 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:08:10.275 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Nov 6 02:08:10.424: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 02:09:10.679: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125 STEP: Create pods that use 4/5 of node resources.
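The preemption spec that starts here fills roughly 4/5 of each node with low- and medium-priority pods and then submits a higher-priority "preemptor-pod", expecting the scheduler to evict a lower-priority victim to make room. Pods opt into those priorities through PriorityClass objects referenced by spec.priorityClassName; a minimal client-go sketch of creating such classes (the names and values here are hypothetical, not the ones the suite manages) is:

```go
package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, matching the /tmp/kubeconfig used by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical class names and values; higher Value means higher priority.
	for name, value := range map[string]int32{
		"sched-preemption-low-priority":    1,
		"sched-preemption-medium-priority": 10,
		"sched-preemption-high-priority":   100,
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(
			context.TODO(), pc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	// Pods then set spec.priorityClassName to one of these names; a pending pod
	// with a higher value can preempt running lower-priority pods when the node
	// has no spare capacity.
}
```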
�[38;5;243m11/06/22 02:09:10.708�[0m Nov 6 02:09:10.784: INFO: Created pod: pod0-0-sched-preemption-low-priority Nov 6 02:09:10.819: INFO: Created pod: pod0-1-sched-preemption-medium-priority Nov 6 02:09:10.892: INFO: Created pod: pod1-0-sched-preemption-medium-priority Nov 6 02:09:10.928: INFO: Created pod: pod1-1-sched-preemption-medium-priority �[1mSTEP:�[0m Wait for pods to be scheduled. �[38;5;243m11/06/22 02:09:10.928�[0m Nov 6 02:09:10.929: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-2756" to be "running" Nov 6 02:09:10.957: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 28.120159ms Nov 6 02:09:12.986: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05681934s Nov 6 02:09:14.994: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065763366s Nov 6 02:09:16.987: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058296685s Nov 6 02:09:18.987: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.058738575s Nov 6 02:09:18.988: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Nov 6 02:09:18.988: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-2756" to be "running" Nov 6 02:09:19.016: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 28.509505ms Nov 6 02:09:21.046: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.057991667s Nov 6 02:09:21.046: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Nov 6 02:09:21.046: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-2756" to be "running" Nov 6 02:09:21.074: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 28.795732ms Nov 6 02:09:21.074: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Nov 6 02:09:21.074: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-2756" to be "running" Nov 6 02:09:21.103: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 28.220061ms Nov 6 02:09:21.103: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" �[1mSTEP:�[0m Run a high priority pod that has same requirements as that of lower priority pod �[38;5;243m11/06/22 02:09:21.103�[0m Nov 6 02:09:21.135: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-2756" to be "running" Nov 6 02:09:21.166: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.359775ms Nov 6 02:09:23.196: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060828963s Nov 6 02:09:25.195: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060115174s Nov 6 02:09:27.196: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061105197s Nov 6 02:09:29.196: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.060785106s Nov 6 02:09:29.196: INFO: Pod "preemptor-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 02:09:29.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-preemption-2756" for this suite. 11/06/22 02:09:29.537
------------------------------
• [79.444 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 [BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 02:09:29.572 Nov 6 02:09:29.572: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir-wrapper 11/06/22 02:09:29.574 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:09:29.662 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:09:29.716 [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 STEP: Creating 50 configmaps 11/06/22 02:09:29.772 STEP: Creating RC which spawns configmap-volume pods 11/06/22 02:09:31.7 Nov 6 02:09:31.803: INFO: Pod name wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea: Found 1 pods out of 5 Nov 6 02:09:36.844: INFO: Pod name wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea: Found 5 pods out of 5 STEP: Ensuring each pod is running 11/06/22 02:09:36.844 Nov 6 02:09:36.844: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:09:36.890: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.902341ms Nov 6 02:09:38.922: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077676329s Nov 6 02:09:40.921: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0775304s Nov 6 02:09:42.920: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false.
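The wrapper-volume race spec exercised here creates 50 ConfigMaps and a replication controller whose pods mount them all, so the kubelet has to set up dozens of volumes for several pods at once. A rough client-go sketch of building one such pod (not the suite's exact objects; the namespace, object names, image, and sleep command are illustrative assumptions) is:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // hypothetical; the suite uses a generated emptydir-wrapper-* namespace

	// Create many ConfigMaps and reference each one as its own volume.
	const n = 50
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical names
		cm := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data-1": "value-1"},
		}
		if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/cm/" + name})
	}

	// One pod mounting all of the ConfigMap volumes; the suite spawns five such
	// pods via a replication controller to provoke the race.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "registry.k8s.io/e2e-test-images/busybox:1.29-2",
				Command:      []string{"sleep", "10000"},
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```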
Elapsed: 6.076622985s Nov 6 02:09:44.926: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082073862s Nov 6 02:09:46.928: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08440795s Nov 6 02:09:48.921: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077010985s Nov 6 02:09:50.922: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b": Phase="Running", Reason="", readiness=true. Elapsed: 14.07842979s Nov 6 02:09:50.922: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-7rn6b" satisfied condition "running" Nov 6 02:09:50.922: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-blllx" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:09:50.953: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-blllx": Phase="Running", Reason="", readiness=true. Elapsed: 30.641054ms Nov 6 02:09:50.953: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-blllx" satisfied condition "running" Nov 6 02:09:50.953: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-fl7rz" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:09:50.983: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-fl7rz": Phase="Running", Reason="", readiness=true. Elapsed: 30.3927ms Nov 6 02:09:50.984: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-fl7rz" satisfied condition "running" Nov 6 02:09:50.984: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-qzshl" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:09:51.014: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-qzshl": Phase="Running", Reason="", readiness=true. Elapsed: 30.22276ms Nov 6 02:09:51.014: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-qzshl" satisfied condition "running" Nov 6 02:09:51.014: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-vjzhl" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:09:51.044: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-vjzhl": Phase="Pending", Reason="", readiness=false. Elapsed: 30.313148ms Nov 6 02:09:53.076: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-vjzhl": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.062204002s Nov 6 02:09:53.076: INFO: Pod "wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea-vjzhl" satisfied condition "running" �[1mSTEP:�[0m deleting ReplicationController wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea in namespace emptydir-wrapper-3725, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:09:53.076�[0m Nov 6 02:09:53.197: INFO: Deleting ReplicationController wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea took: 38.77705ms Nov 6 02:09:53.299: INFO: Terminating ReplicationController wrapped-volume-race-7fd7365c-9bd7-4792-a9ec-5e969a78ecea pods took: 101.287053ms �[1mSTEP:�[0m Creating RC which spawns configmap-volume pods �[38;5;243m11/06/22 02:09:57.13�[0m Nov 6 02:09:57.211: INFO: Pod name wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d: Found 1 pods out of 5 Nov 6 02:10:02.252: INFO: Pod name wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d: Found 5 pods out of 5 �[1mSTEP:�[0m Ensuring each pod is running �[38;5;243m11/06/22 02:10:02.252�[0m Nov 6 02:10:02.252: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:02.298: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 46.432456ms Nov 6 02:10:04.331: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078823261s Nov 6 02:10:06.329: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077528178s Nov 6 02:10:08.331: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078800665s Nov 6 02:10:10.331: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079327743s Nov 6 02:10:12.330: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077606009s Nov 6 02:10:14.329: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077534636s Nov 6 02:10:16.331: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl": Phase="Running", Reason="", readiness=true. Elapsed: 14.079529831s Nov 6 02:10:16.332: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-d27sl" satisfied condition "running" Nov 6 02:10:16.332: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-jvz4j" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:16.366: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-jvz4j": Phase="Pending", Reason="", readiness=false. Elapsed: 34.022218ms Nov 6 02:10:18.397: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-jvz4j": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.065256293s Nov 6 02:10:18.397: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-jvz4j" satisfied condition "running" Nov 6 02:10:18.397: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-lvvxj" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:18.427: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-lvvxj": Phase="Running", Reason="", readiness=true. Elapsed: 29.94084ms Nov 6 02:10:18.427: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-lvvxj" satisfied condition "running" Nov 6 02:10:18.427: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-nxjdm" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:18.460: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-nxjdm": Phase="Running", Reason="", readiness=true. Elapsed: 32.684146ms Nov 6 02:10:18.460: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-nxjdm" satisfied condition "running" Nov 6 02:10:18.460: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-xf5tb" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:18.490: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-xf5tb": Phase="Running", Reason="", readiness=true. Elapsed: 30.300517ms Nov 6 02:10:18.490: INFO: Pod "wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d-xf5tb" satisfied condition "running" �[1mSTEP:�[0m deleting ReplicationController wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d in namespace emptydir-wrapper-3725, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:10:18.49�[0m Nov 6 02:10:18.610: INFO: Deleting ReplicationController wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d took: 36.206094ms Nov 6 02:10:18.711: INFO: Terminating ReplicationController wrapped-volume-race-f639cc72-0660-45eb-911e-079d5ceedc5d pods took: 100.909534ms �[1mSTEP:�[0m Creating RC which spawns configmap-volume pods �[38;5;243m11/06/22 02:10:23.343�[0m Nov 6 02:10:23.420: INFO: Pod name wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b: Found 1 pods out of 5 Nov 6 02:10:28.459: INFO: Pod name wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b: Found 5 pods out of 5 �[1mSTEP:�[0m Ensuring each pod is running �[38;5;243m11/06/22 02:10:28.459�[0m Nov 6 02:10:28.460: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:28.507: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 47.830537ms Nov 6 02:10:30.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078986904s Nov 6 02:10:32.538: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078616171s Nov 6 02:10:34.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079162919s Nov 6 02:10:36.538: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078780545s Nov 6 02:10:38.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.079505853s Nov 6 02:10:40.540: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.080057875s Nov 6 02:10:42.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.079839081s Nov 6 02:10:44.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p": Phase="Running", Reason="", readiness=true. Elapsed: 16.079251711s Nov 6 02:10:44.539: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-9z89p" satisfied condition "running" Nov 6 02:10:44.539: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-hjwgf" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:44.569: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-hjwgf": Phase="Running", Reason="", readiness=true. Elapsed: 29.946195ms Nov 6 02:10:44.569: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-hjwgf" satisfied condition "running" Nov 6 02:10:44.569: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-m9jmv" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:44.599: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-m9jmv": Phase="Running", Reason="", readiness=true. Elapsed: 29.843796ms Nov 6 02:10:44.599: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-m9jmv" satisfied condition "running" Nov 6 02:10:44.599: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-t6krs" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:44.629: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-t6krs": Phase="Running", Reason="", readiness=true. Elapsed: 29.738473ms Nov 6 02:10:44.629: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-t6krs" satisfied condition "running" Nov 6 02:10:44.629: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-wdrzq" in namespace "emptydir-wrapper-3725" to be "running" Nov 6 02:10:44.659: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-wdrzq": Phase="Running", Reason="", readiness=true. 
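Each "Waiting up to 5m0s for pod ... to be "running"" block above is a simple phase poll against the API server. A small client-go helper in the same spirit (a sketch only; the namespace and pod name in main are placeholders, not objects from this run) might look like:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls a pod until it reports phase Running or the timeout
// expires, printing progress in roughly the same shape as the log lines above.
func waitForPodRunning(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodRunning:
			return true, nil
		case corev1.PodFailed, corev1.PodSucceeded:
			return false, fmt.Errorf("pod %q ended with phase %s", name, pod.Status.Phase)
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder namespace and pod name standing in for the generated ones above.
	if err := waitForPodRunning(cs, "emptydir-wrapper-3725", "example-pod", 5*time.Minute); err != nil {
		panic(err)
	}
}
```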
Elapsed: 30.109826ms
Nov 6 02:10:44.659: INFO: Pod "wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b-wdrzq" satisfied condition "running"
STEP: deleting ReplicationController wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b in namespace emptydir-wrapper-3725, will wait for the garbage collector to delete the pods 11/06/22 02:10:44.659
Nov 6 02:10:44.780: INFO: Deleting ReplicationController wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b took: 40.582971ms
Nov 6 02:10:44.980: INFO: Terminating ReplicationController wrapped-volume-race-ec6ec33f-095a-4edd-822b-542bea4b343b pods took: 200.223952ms
STEP: Cleaning up the configMaps 11/06/22 02:10:48.881
[AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32
Nov 6 02:10:50.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193
STEP: Destroying namespace "emptydir-wrapper-3725" for this suite. 11/06/22 02:10:50.583
------------------------------
• [81.044 seconds]
[sig-storage] EmptyDir wrapper volumes test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189
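The repeated 'Waiting up to 5m0s for pod ... to be "running"' entries above are the e2e framework polling each pod's phase roughly every two seconds until it reports Running. A minimal sketch of that wait pattern, assuming client-go and the run's --kubeconfig=/tmp/kubeconfig; the helper name and the example pod name are placeholders, not the framework's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls a pod every 2s for up to 5m, mirroring the cadence
// and timeout visible in the log lines above (illustrative sketch only).
func waitForPodRunning(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop early on a Get error
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodRunning:
			return true, nil // condition "running" satisfied
		case corev1.PodFailed, corev1.PodSucceeded:
			return false, fmt.Errorf("pod %s reached terminal phase %s", name, pod.Status.Phase)
		default:
			return false, nil // still Pending: keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace taken from the spec above; the pod name is a placeholder.
	if err := waitForPodRunning(cs, "emptydir-wrapper-3725", "example-pod"); err != nil {
		panic(err)
	}
}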
------------------------------
SSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:10:50.617
Nov 6 02:10:50.617: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:10:50.619
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:10:50.708
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:10:50.762
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] [Slow] Should scale from 2 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
Nov 6 02:10:50.817: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC rc-light via /v1, Kind=ReplicationController with 2 replicas 11/06/22 02:10:50.819
STEP: creating replication controller rc-light in namespace horizontal-pod-autoscaling-9641 11/06/22 02:10:50.865
I1106 02:10:50.899486 14 runners.go:193] Created replication controller with name: rc-light, namespace: horizontal-pod-autoscaling-9641, replica count: 2
I1106 02:11:00.950226 14 runners.go:193] rc-light Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 02:11:00.95
STEP: creating replication controller rc-light-ctrl in namespace horizontal-pod-autoscaling-9641 11/06/22 02:11:00.995
I1106 02:11:01.031294 14 runners.go:193] Created replication controller with name:
rc-light-ctrl, namespace: horizontal-pod-autoscaling-9641, replica count: 1 I1106 02:11:11.082308 14 runners.go:193] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:11:16.083: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1 Nov 6 02:11:16.112: INFO: RC rc-light: consume 50 millicores in total Nov 6 02:11:16.112: INFO: RC rc-light: setting consumption to 50 millicores in total Nov 6 02:11:16.112: INFO: RC rc-light: consume 0 MB in total Nov 6 02:11:16.112: INFO: RC rc-light: disabling mem consumption Nov 6 02:11:16.112: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:11:16.112: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:11:16.112: INFO: RC rc-light: consume custom metric 0 in total Nov 6 02:11:16.113: INFO: RC rc-light: disabling consumption of custom metric QPS Nov 6 02:11:16.175: INFO: waiting for 1 replicas (current: 2) Nov 6 02:11:36.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:11:46.185: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:11:46.185: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:11:56.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:12:16.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:12:16.223: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:12:16.223: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:12:36.203: INFO: waiting for 1 replicas (current: 2) Nov 6 02:12:46.260: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:12:46.260: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:12:56.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:13:16.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:13:16.296: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:13:16.297: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:13:36.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:13:46.331: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:13:46.331: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:13:56.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:14:16.208: INFO: waiting for 1 replicas (current: 2) Nov 6 02:14:16.367: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 
02:14:16.367: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:14:36.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:14:46.402: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:14:46.402: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:14:56.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:15:16.205: INFO: waiting for 1 replicas (current: 2) Nov 6 02:15:16.438: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:15:16.438: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:15:36.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:15:46.474: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:15:46.474: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:15:56.205: INFO: waiting for 1 replicas (current: 2) Nov 6 02:16:16.204: INFO: waiting for 1 replicas (current: 2) Nov 6 02:16:16.511: INFO: RC rc-light: sending request to consume 50 millicores Nov 6 02:16:16.511: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9641/services/rc-light-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=50&requestSizeMillicores=100 } Nov 6 02:16:36.204: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m Removing consuming RC rc-light �[38;5;243m11/06/22 02:16:36.236�[0m Nov 6 02:16:36.236: INFO: RC rc-light: stopping metric consumer Nov 6 02:16:36.236: INFO: RC rc-light: stopping CPU consumer Nov 6 02:16:36.236: INFO: RC rc-light: stopping mem consumer �[1mSTEP:�[0m deleting ReplicationController rc-light in namespace horizontal-pod-autoscaling-9641, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:16:46.237�[0m Nov 6 02:16:46.348: INFO: Deleting ReplicationController rc-light took: 31.948704ms Nov 6 02:16:46.449: INFO: Terminating ReplicationController rc-light pods took: 100.558033ms �[1mSTEP:�[0m deleting ReplicationController rc-light-ctrl in namespace horizontal-pod-autoscaling-9641, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:16:48.621�[0m Nov 6 02:16:48.733: INFO: Deleting ReplicationController rc-light-ctrl took: 32.033447ms Nov 6 02:16:48.834: INFO: Terminating ReplicationController rc-light-ctrl pods took: 100.407574ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 02:16:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: 
CPU) dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-9641" for this suite. 11/06/22 02:16:51.034
------------------------------
• [SLOW TEST] [360.451 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23
ReplicationController light test/e2e/autoscaling/horizontal_pod_autoscaling.go:88
[Slow] Should scale from 2 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
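The 'ConsumeCPU URL' entries above show how these HPA specs generate load: the test POSTs to the rc-light-ctrl service through the API server's services/<name>/proxy subresource, passing durationSec, millicores and requestSizeMillicores as query parameters. A compilable sketch of that call, assuming client-go; it approximates the e2e ResourceConsumer helper rather than copying it, and the function name is made up:

package consumer

import (
	"context"
	"strconv"

	"k8s.io/client-go/kubernetes"
)

// consumeCPU asks the consumer service to burn the given number of millicores
// for 30 seconds, in 100-millicore requests, matching the query string in the
// log (durationSec=30&millicores=...&requestSizeMillicores=100).
func consumeCPU(cs kubernetes.Interface, ns, service string, millicores int) error {
	_, err := cs.CoreV1().RESTClient().Post().
		Namespace(ns).
		Resource("services").
		Name(service).
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("millicores", strconv.Itoa(millicores)).
		Param("durationSec", "30").
		Param("requestSizeMillicores", "100").
		DoRaw(context.TODO())
	return err
}

// Example: consumeCPU(cs, "horizontal-pod-autoscaling-9641", "rc-light-ctrl", 50)
// reproduces the 'consume 50 millicores' requests logged above.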
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:73
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP:
Creating a kubernetes client �[38;5;243m11/06/22 02:16:51.079�[0m Nov 6 02:16:51.079: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/06/22 02:16:51.08�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 02:16:51.17�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 02:16:51.224�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:73 Nov 6 02:16:51.279: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 5 replicas �[38;5;243m11/06/22 02:16:51.28�[0m �[1mSTEP:�[0m Creating replicaset rs in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:16:51.32�[0m �[1mSTEP:�[0m creating replicaset rs in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:16:51.32�[0m I1106 02:16:51.352488 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-786, replica count: 5 I1106 02:17:01.403647 14 runners.go:193] rs Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 02:17:01.403�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:17:01.447�[0m I1106 02:17:01.499902 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-786, replica count: 1 I1106 02:17:11.551875 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:17:16.552: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 6 02:17:16.581: INFO: RC rs: consume 325 millicores in total Nov 6 02:17:16.581: INFO: RC rs: setting consumption to 325 millicores in total Nov 6 02:17:16.581: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:17:16.581: INFO: RC rs: consume 0 MB in total Nov 6 02:17:16.581: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:17:16.581: INFO: RC rs: disabling mem consumption Nov 6 02:17:16.581: INFO: RC rs: consume custom metric 0 in total Nov 6 02:17:16.581: INFO: RC rs: disabling consumption of custom metric QPS Nov 6 02:17:16.646: INFO: waiting for 3 replicas (current: 5) Nov 6 02:17:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:17:46.650: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:17:46.651: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:17:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:16.697: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:18:16.697: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:18:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:46.737: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:18:46.737: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:18:56.677: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:16.776: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:19:16.776: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:19:36.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:46.813: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:19:46.813: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:19:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:16.849: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:20:16.849: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:20:36.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:46.886: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:20:46.886: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:20:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:16.924: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:21:16.924: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:21:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:46.961: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:21:46.961: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:21:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:22:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:22:16.998: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:22:16.998: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:22:36.677: INFO: waiting for 3 replicas (current: 3) Nov 6 02:22:36.677: INFO: RC rs: consume 10 millicores in total Nov 6 02:22:36.677: INFO: RC rs: setting consumption to 10 millicores in total Nov 6 02:22:36.705: INFO: waiting for 1 replicas (current: 3) Nov 6 02:22:47.036: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:22:47.036: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:22:56.735: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:16.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:17.072: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:23:17.072: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:23:36.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:47.111: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:23:47.111: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:23:56.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:16.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:17.146: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:24:17.146: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:24:36.738: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:47.182: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:24:47.182: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:24:56.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:25:16.739: INFO: waiting for 1 replicas (current: 3) Nov 6 02:25:17.217: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:25:17.217: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:25:36.735: INFO: waiting for 1 replicas (current: 3) Nov 6 02:25:47.254: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:25:47.254: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:25:56.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:16.740: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:17.290: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:26:17.290: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:26:36.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:47.326: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:26:47.326: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:26:56.736: INFO: waiting for 1 replicas (current: 3) Nov 6 02:27:16.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:27:17.364: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:27:17.364: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:27:36.733: INFO: waiting for 1 replicas (current: 2) Nov 6 02:27:47.410: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:27:47.410: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:27:56.734: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m11/06/22 02:27:56.766�[0m Nov 6 02:27:56.766: INFO: RC rs: stopping metric consumer Nov 6 02:27:56.766: INFO: RC rs: stopping CPU consumer Nov 6 02:27:56.766: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-786, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:28:06.767�[0m Nov 6 02:28:06.880: INFO: Deleting ReplicaSet.apps rs took: 33.250693ms Nov 6 02:28:06.980: INFO: Terminating ReplicaSet.apps rs pods took: 100.648322ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-786, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:28:08.732�[0m Nov 6 02:28:08.843: INFO: Deleting ReplicationController rs-ctrl took: 31.768041ms Nov 6 02:28:08.944: INFO: Terminating ReplicationController rs-ctrl pods took: 100.715722ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 02:28:11.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-786" for this suite. 
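The long runs of 'waiting for N replicas (current: M)' above are the test polling the scale target about every 20 seconds until the HorizontalPodAutoscaler has brought it to the expected size (5, then 3, then 1 here). A rough sketch of that loop, assuming client-go; the helper name, the 15-minute timeout and the use of ReplicaSet status are illustrative, not the framework's exact implementation:

package hpawait

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls the ReplicaSet and compares its observed replica
// count with the count the HPA is expected to reach.
func waitForReplicas(cs kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
		rs, err := cs.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, rs.Status.Replicas)
		return rs.Status.Replicas == want, nil
	})
}

// Example matching the spec above: waitForReplicas(cs, "horizontal-pod-autoscaling-786", "rs", 3)
// followed by waitForReplicas(cs, "horizontal-pod-autoscaling-786", "rs", 1).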
�[38;5;243m11/06/22 02:28:11.348�[0m �[38;5;243m------------------------------�[0m �[38;5;10m• [SLOW TEST] [680.303 seconds]�[0m [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) �[38;5;243mtest/e2e/autoscaling/framework.go:23�[0m [Serial] [Slow] ReplicaSet �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:69�[0m Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod �[38;5;243mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:73�[0m �[38;5;243mBegin Captured GinkgoWriter Output >>�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/06/22 02:16:51.079�[0m Nov 6 02:16:51.079: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/06/22 02:16:51.08�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/06/22 02:16:51.17�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/06/22 02:16:51.224�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:73 Nov 6 02:16:51.279: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 5 replicas �[38;5;243m11/06/22 02:16:51.28�[0m �[1mSTEP:�[0m Creating replicaset rs in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:16:51.32�[0m �[1mSTEP:�[0m creating replicaset rs in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:16:51.32�[0m I1106 02:16:51.352488 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-786, replica count: 5 I1106 02:17:01.403647 14 runners.go:193] rs Pods: 5 out of 5 created, 5 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 02:17:01.403�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-786 �[38;5;243m11/06/22 02:17:01.447�[0m I1106 02:17:01.499902 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-786, replica count: 1 I1106 02:17:11.551875 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:17:16.552: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 6 02:17:16.581: INFO: RC rs: consume 325 millicores in total Nov 6 02:17:16.581: INFO: RC rs: setting consumption to 325 millicores in total Nov 6 02:17:16.581: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:17:16.581: INFO: RC rs: consume 0 MB in total Nov 6 02:17:16.581: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:17:16.581: INFO: RC rs: disabling mem consumption Nov 6 02:17:16.581: INFO: RC rs: consume custom metric 0 in total Nov 6 02:17:16.581: INFO: RC rs: disabling consumption of custom metric QPS Nov 6 02:17:16.646: INFO: waiting for 3 replicas (current: 5) Nov 6 
02:17:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:17:46.650: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:17:46.651: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:17:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:16.697: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:18:16.697: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:18:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:18:46.737: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:18:46.737: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:18:56.677: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:16.776: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:19:16.776: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:19:36.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:19:46.813: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:19:46.813: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:19:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:16.849: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:20:16.849: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:20:36.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:20:46.886: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:20:46.886: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:20:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:16.924: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:21:16.924: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:21:36.675: INFO: waiting for 3 replicas (current: 5) Nov 6 02:21:46.961: INFO: RC rs: sending request to consume 325 millicores 
Nov 6 02:21:46.961: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:21:56.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:22:16.676: INFO: waiting for 3 replicas (current: 5) Nov 6 02:22:16.998: INFO: RC rs: sending request to consume 325 millicores Nov 6 02:22:16.998: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 6 02:22:36.677: INFO: waiting for 3 replicas (current: 3) Nov 6 02:22:36.677: INFO: RC rs: consume 10 millicores in total Nov 6 02:22:36.677: INFO: RC rs: setting consumption to 10 millicores in total Nov 6 02:22:36.705: INFO: waiting for 1 replicas (current: 3) Nov 6 02:22:47.036: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:22:47.036: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:22:56.735: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:16.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:17.072: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:23:17.072: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:23:36.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:23:47.111: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:23:47.111: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:23:56.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:16.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:17.146: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:24:17.146: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:24:36.738: INFO: waiting for 1 replicas (current: 3) Nov 6 02:24:47.182: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:24:47.182: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:24:56.737: INFO: waiting for 1 replicas (current: 3) Nov 6 02:25:16.739: INFO: waiting for 1 replicas (current: 3) Nov 6 02:25:17.217: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:25:17.217: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:25:36.735: INFO: waiting for 1 replicas (current: 3) Nov 6 
02:25:47.254: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:25:47.254: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:25:56.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:16.740: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:17.290: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:26:17.290: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:26:36.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:26:47.326: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:26:47.326: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:26:56.736: INFO: waiting for 1 replicas (current: 3) Nov 6 02:27:16.734: INFO: waiting for 1 replicas (current: 3) Nov 6 02:27:17.364: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:27:17.364: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:27:36.733: INFO: waiting for 1 replicas (current: 2) Nov 6 02:27:47.410: INFO: RC rs: sending request to consume 10 millicores Nov 6 02:27:47.410: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-786/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=10&requestSizeMillicores=100 } Nov 6 02:27:56.734: INFO: waiting for 1 replicas (current: 1) �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m11/06/22 02:27:56.766�[0m Nov 6 02:27:56.766: INFO: RC rs: stopping metric consumer Nov 6 02:27:56.766: INFO: RC rs: stopping CPU consumer Nov 6 02:27:56.766: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-786, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:28:06.767�[0m Nov 6 02:28:06.880: INFO: Deleting ReplicaSet.apps rs took: 33.250693ms Nov 6 02:28:06.980: INFO: Terminating ReplicaSet.apps rs pods took: 100.648322ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-786, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:28:08.732�[0m Nov 6 02:28:08.843: INFO: Deleting ReplicationController rs-ctrl took: 31.768041ms Nov 6 02:28:08.944: INFO: Terminating ReplicationController rs-ctrl pods took: 100.715722ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 02:28:11.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 [DeferCleanup (Each)] 
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-786" for this suite. 11/06/22 02:28:11.348
<< End Captured GinkgoWriter Output
------------------------------
[sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
test/e2e/apps/controller_revision.go:124
[BeforeEach] [sig-apps] ControllerRevision [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:28:11.385
Nov 6 02:28:11.385: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename controllerrevisions 11/06/22 02:28:11.387
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:28:11.477
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:28:11.532
[BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93
[It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
STEP: Creating DaemonSet "e2e-jqqsl-daemon-set" 11/06/22 02:28:11.711
STEP: Check that daemon pods launch on every node of the cluster.
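The entries that follow show how the framework decides which nodes must run a daemon pod: the control-plane node capz-conf-gdu8bn-control-plane-tjg6t is skipped because the test DaemonSet's pods carry no toleration for its node-role.kubernetes.io/control-plane NoSchedule taint, so only the two worker nodes are counted. A minimal sketch of that filtering decision, using simplified stand-in types rather than the corev1 API structs and full matching rules the e2e check actually uses:

    package main

    import "fmt"

    // Simplified stand-ins for the taint and toleration fields printed in the log;
    // the real check works with the corev1 API types.
    type taint struct{ key, effect string }
    type toleration struct{ key, effect string }

    // tolerates reports whether the pod tolerations cover every NoSchedule taint on
    // a node; nodes that fail this check are skipped when counting daemon pods.
    func tolerates(tols []toleration, taints []taint) bool {
        for _, t := range taints {
            if t.effect != "NoSchedule" {
                continue
            }
            matched := false
            for _, tol := range tols {
                if tol.key == t.key && (tol.effect == "" || tol.effect == t.effect) {
                    matched = true
                    break
                }
            }
            if !matched {
                return false
            }
        }
        return true
    }

    func main() {
        controlPlane := []taint{{key: "node-role.kubernetes.io/control-plane", effect: "NoSchedule"}}
        var daemonPod []toleration // the test DaemonSet declares no control-plane toleration
        fmt.Println("count daemon pods on control-plane node:", tolerates(daemonPod, controlPlane))
    }

Running it prints false for the control-plane node, which is why the expected daemon-pod count in the entries that follow is 2, not 3.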
�[38;5;243m11/06/22 02:28:11.744�[0m Nov 6 02:28:11.783: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:11.816: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:11.816: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:28:12.848: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:12.877: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:12.877: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:28:13.846: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:13.875: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:13.875: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:28:14.847: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:14.875: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:14.875: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:28:15.847: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:15.876: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:15.876: INFO: Node capz-conf-6qqvv is running 0 daemon pod, expected 1 Nov 6 02:28:16.849: INFO: DaemonSet pods can't tolerate node capz-conf-gdu8bn-control-plane-tjg6t with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 6 02:28:16.877: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 2 Nov 6 02:28:16.877: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-jqqsl-daemon-set �[1mSTEP:�[0m Confirm DaemonSet "e2e-jqqsl-daemon-set" successfully created with "daemonset-name=e2e-jqqsl-daemon-set" label �[38;5;243m11/06/22 02:28:16.905�[0m �[1mSTEP:�[0m Listing all ControllerRevisions with label "daemonset-name=e2e-jqqsl-daemon-set" �[38;5;243m11/06/22 02:28:16.963�[0m Nov 6 02:28:16.992: INFO: Located ControllerRevision: "e2e-jqqsl-daemon-set-748d8f5b5f" �[1mSTEP:�[0m Patching ControllerRevision "e2e-jqqsl-daemon-set-748d8f5b5f" �[38;5;243m11/06/22 02:28:17.02�[0m Nov 6 02:28:17.055: INFO: e2e-jqqsl-daemon-set-748d8f5b5f has been patched �[1mSTEP:�[0m Create a new ControllerRevision �[38;5;243m11/06/22 02:28:17.055�[0m Nov 6 02:28:17.087: INFO: Created ControllerRevision: e2e-jqqsl-daemon-set-76f678b677 �[1mSTEP:�[0m Confirm that there are two ControllerRevisions �[38;5;243m11/06/22 02:28:17.087�[0m Nov 6 02:28:17.087: INFO: Requesting list of ControllerRevisions to confirm quantity Nov 6 02:28:17.115: INFO: Found 2 ControllerRevisions �[1mSTEP:�[0m 
Deleting ControllerRevision "e2e-jqqsl-daemon-set-748d8f5b5f" �[38;5;243m11/06/22 02:28:17.115�[0m �[1mSTEP:�[0m Confirm that there is only one ControllerRevision �[38;5;243m11/06/22 02:28:17.153�[0m Nov 6 02:28:17.153: INFO: Requesting list of ControllerRevisions to confirm quantity Nov 6 02:28:17.182: INFO: Found 1 ControllerRevisions �[1mSTEP:�[0m Updating ControllerRevision "e2e-jqqsl-daemon-set-76f678b677" �[38;5;243m11/06/22 02:28:17.21�[0m Nov 6 02:28:17.274: INFO: e2e-jqqsl-daemon-set-76f678b677 has been updated �[1mSTEP:�[0m Generate another ControllerRevision by patching the Daemonset �[38;5;243m11/06/22 02:28:17.274�[0m W1106 02:28:17.307635 14 warnings.go:70] unknown field "updateStrategy" �[1mSTEP:�[0m Confirm that there are two ControllerRevisions �[38;5;243m11/06/22 02:28:17.307�[0m Nov 6 02:28:17.308: INFO: Requesting list of ControllerRevisions to confirm quantity Nov 6 02:28:17.340: INFO: Found 2 ControllerRevisions �[1mSTEP:�[0m Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-jqqsl-daemon-set-76f678b677=updated" �[38;5;243m11/06/22 02:28:17.34�[0m �[1mSTEP:�[0m Confirm that there is only one ControllerRevision �[38;5;243m11/06/22 02:28:17.375�[0m Nov 6 02:28:17.375: INFO: Requesting list of ControllerRevisions to confirm quantity Nov 6 02:28:17.407: INFO: Found 1 ControllerRevisions Nov 6 02:28:17.436: INFO: ControllerRevision "e2e-jqqsl-daemon-set-6bd785b5d8" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:58 �[1mSTEP:�[0m Deleting DaemonSet "e2e-jqqsl-daemon-set" �[38;5;243m11/06/22 02:28:17.468�[0m �[1mSTEP:�[0m deleting DaemonSet.extensions e2e-jqqsl-daemon-set in namespace controllerrevisions-4027, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:28:17.468�[0m Nov 6 02:28:17.578: INFO: Deleting DaemonSet.extensions e2e-jqqsl-daemon-set took: 31.422502ms Nov 6 02:28:17.679: INFO: Terminating DaemonSet.extensions e2e-jqqsl-daemon-set pods took: 101.086878ms Nov 6 02:28:22.508: INFO: Number of nodes with available pods controlled by daemonset e2e-jqqsl-daemon-set: 0 Nov 6 02:28:22.508: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-jqqsl-daemon-set Nov 6 02:28:22.536: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"11365"},"items":null} Nov 6 02:28:22.564: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11365"},"items":null} [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/node/init/init.go:32 Nov 6 02:28:22.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "controllerrevisions-4027" for this suite. 
11/06/22 02:28:22.685
------------------------------
• [11.333 seconds]
[sig-apps] ControllerRevision [Serial] test/e2e/apps/framework.go:23
should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:28:22.723
Nov 6 02:28:22.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:28:22.725
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:28:22.813
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:28:22.868
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31
[It] should scale up no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216
STEP: setting up resource consumer and HPA 11/06/22 02:28:22.922
Nov 6 02:28:22.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 02:28:22.923
STEP: Creating deployment consumer in namespace horizontal-pod-autoscaling-6754 11/06/22 02:28:22.967
I1106 02:28:23.005406 14
runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-6754, replica count: 1 I1106 02:28:33.056304 14 runners.go:193] consumer Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 02:28:33.056�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-6754 �[38;5;243m11/06/22 02:28:33.096�[0m I1106 02:28:33.133499 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-6754, replica count: 1 I1106 02:28:43.184966 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:28:48.185: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 6 02:28:48.215: INFO: RC consumer: consume 45 millicores in total Nov 6 02:28:48.215: INFO: RC consumer: setting consumption to 45 millicores in total Nov 6 02:28:48.215: INFO: RC consumer: sending request to consume 45 millicores Nov 6 02:28:48.215: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6754/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 } Nov 6 02:28:48.215: INFO: RC consumer: consume 0 MB in total Nov 6 02:28:48.215: INFO: RC consumer: consume custom metric 0 in total Nov 6 02:28:48.215: INFO: RC consumer: disabling consumption of custom metric QPS Nov 6 02:28:48.215: INFO: RC consumer: disabling mem consumption �[1mSTEP:�[0m triggering scale up by increasing consumption �[38;5;243m11/06/22 02:28:48.248�[0m Nov 6 02:28:48.248: INFO: RC consumer: consume 135 millicores in total Nov 6 02:28:48.282: INFO: RC consumer: setting consumption to 135 millicores in total Nov 6 02:28:48.310: INFO: waiting for 2 replicas (current: 1) Nov 6 02:29:08.340: INFO: waiting for 2 replicas (current: 1) Nov 6 02:29:18.282: INFO: RC consumer: sending request to consume 135 millicores Nov 6 02:29:18.282: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6754/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 6 02:29:28.340: INFO: waiting for 2 replicas (current: 2) Nov 6 02:29:28.370: INFO: waiting for 3 replicas (current: 2) Nov 6 02:29:48.405: INFO: waiting for 3 replicas (current: 2) Nov 6 02:29:48.567: INFO: RC consumer: sending request to consume 135 millicores Nov 6 02:29:48.567: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6754/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 6 02:30:08.399: INFO: waiting for 3 replicas (current: 2) Nov 6 02:30:18.605: INFO: RC consumer: sending request to consume 135 millicores Nov 6 02:30:18.605: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6754/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 6 02:30:28.400: INFO: waiting for 3 replicas (current: 3) �[1mSTEP:�[0m verifying time waited for a scale up to 2 replicas �[38;5;243m11/06/22 02:30:28.4�[0m �[1mSTEP:�[0m verifying 
time waited for a scale up to 3 replicas �[38;5;243m11/06/22 02:30:28.4�[0m �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/06/22 02:30:28.434�[0m Nov 6 02:30:28.435: INFO: RC consumer: stopping metric consumer Nov 6 02:30:28.435: INFO: RC consumer: stopping CPU consumer Nov 6 02:30:28.435: INFO: RC consumer: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-6754, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:30:38.435�[0m Nov 6 02:30:38.551: INFO: Deleting Deployment.apps consumer took: 34.222324ms Nov 6 02:30:38.651: INFO: Terminating Deployment.apps consumer pods took: 100.36659ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-6754, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 02:30:41.413�[0m Nov 6 02:30:41.524: INFO: Deleting ReplicationController consumer-ctrl took: 32.127542ms Nov 6 02:30:41.625: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.968616ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 6 02:30:43.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-6754" for this suite. 
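The ConsumeCPU URL entries in this spec show how load is generated: the test drives CPU consumption by POSTing to the resource consumer's /ConsumeCPU endpoint through the API server's service proxy, with durationSec, millicores and requestSizeMillicores as query parameters. A minimal stand-alone sketch of one such request, assuming a bearer token for authentication (the real suite authenticates with the test kubeconfig) and with TLS verification disabled only because the target is a short-lived test cluster; the host and path come from the log lines above, everything else is illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        host := "capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443"
        token := "REPLACE_WITH_BEARER_TOKEN" // placeholder credential, not part of the log

        // Same shape as the ConsumeCPU URLs logged above: API server service proxy
        // for the consumer-ctrl service plus the consumption parameters.
        u := url.URL{
            Scheme:   "https",
            Host:     host,
            Path:     "/api/v1/namespaces/horizontal-pod-autoscaling-6754/services/consumer-ctrl/proxy/ConsumeCPU",
            RawQuery: "durationSec=30&millicores=135&requestSizeMillicores=100",
        }

        req, err := http.NewRequest(http.MethodPost, u.String(), nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)

        // TLS verification is skipped only because this is a throwaway sketch against
        // a short-lived test cluster; real callers should verify the API server cert.
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("ConsumeCPU status:", resp.Status)
    }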
11/06/22 02:30:43.625
------------------------------
• [SLOW TEST] [140.935 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/autoscaling/framework.go:23
with scale limited by number of Pods rate test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:211
should scale up no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:216
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:30:43.665
Nov 6 02:30:43.666: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:30:43.667
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:30:43.759
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:30:43.814
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
Nov 6 02:30:43.869: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 11/06/22 02:30:43.87
STEP: Creating replicaset rs in namespace horizontal-pod-autoscaling-1322 11/06/22 02:30:43.919
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-1322 11/06/22 02:30:43.919
I1106
02:30:43.951654 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-1322, replica count: 1 I1106 02:30:54.004872 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 02:30:54.005�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-1322 �[38;5;243m11/06/22 02:30:54.052�[0m I1106 02:30:54.086831 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-1322, replica count: 1 I1106 02:31:04.141575 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:31:09.142: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 6 02:31:09.170: INFO: RC rs: consume 250 millicores in total Nov 6 02:31:09.170: INFO: RC rs: setting consumption to 250 millicores in total Nov 6 02:31:09.171: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:31:09.171: INFO: RC rs: consume 0 MB in total Nov 6 02:31:09.171: INFO: RC rs: disabling mem consumption Nov 6 02:31:09.171: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:31:09.171: INFO: RC rs: consume custom metric 0 in total Nov 6 02:31:09.171: INFO: RC rs: disabling consumption of custom metric QPS Nov 6 02:31:09.233: INFO: waiting for 3 replicas (current: 1) Nov 6 02:31:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:31:39.236: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:31:39.236: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:31:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:12.283: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:32:12.283: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:32:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:42.319: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:32:42.319: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:32:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:12.358: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:33:12.358: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:33:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:42.397: INFO: RC rs: sending request to consume 250 millicores 
Nov 6 02:33:42.397: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:33:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:12.432: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:34:12.432: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:34:29.266: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:42.472: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:34:42.472: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:34:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:12.508: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:35:12.508: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:35:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:42.542: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:35:42.542: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:35:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:12.579: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:36:12.579: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:36:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:42.616: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:36:42.617: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:36:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:12.652: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:37:12.652: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:37:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:42.688: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:37:42.688: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:37:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:12.723: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:38:12.723: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:38:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:42.761: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:38:42.762: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:38:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:12.807: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:39:12.807: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:39:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:42.843: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:39:42.843: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:39:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:09.261: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:12.879: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:40:12.879: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:40:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:42.918: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:40:42.918: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:40:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:12.954: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:41:12.954: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:41:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:42.991: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:41:42.991: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:41:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:09.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:13.029: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:42:13.029: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:42:29.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:43.065: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:42:43.066: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:42:49.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:13.106: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:43:13.106: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:43:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:43.144: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:43:43.144: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:43:49.267: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:09.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:13.181: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:44:13.181: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:44:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:43.216: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:44:43.217: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:44:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:13.257: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:45:13.257: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:45:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:43.293: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:45:43.294: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:45:49.262: INFO: waiting for 3 replicas (current: 2) 
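The long run of "waiting for 3 replicas (current: 2)" entries is a fixed-interval poll of the ReplicaSet's scale that gives up after a 15m budget, which is exactly what the failure below reports. A minimal sketch of that polling pattern; waitForReplicas, getReplicas and the interval are illustrative stand-ins, not the e2e framework's actual helpers:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForReplicas mirrors the "waiting for N replicas (current: M)" loop in the
    // log: poll a scale source until it reports the desired count or the timeout
    // elapses. getReplicas is a stand-in for the framework's scale lookup.
    func waitForReplicas(want int, timeout, interval time.Duration, getReplicas func() (int, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            got, err := getReplicas()
            if err != nil {
                return err
            }
            fmt.Printf("waiting for %d replicas (current: %d)\n", want, got)
            if got == want {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Fake scale source stuck at 2 replicas, reproducing the failure mode below
        // (target 3, budget compressed from 15m to a couple of seconds for the demo).
        err := waitForReplicas(3, 2*time.Second, 500*time.Millisecond, func() (int, error) {
            return 2, nil
        })
        fmt.Println("result:", err)
    }

In the failed spec the ReplicaSet never reaches 3 replicas, so the loop exhausts its budget and returns the "timed out waiting for the condition" error reported below.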
Nov 6 02:46:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:46:09.291: INFO: waiting for 3 replicas (current: 2) Nov 6 02:46:09.291: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 02:46:09.291: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002dffe68, {0x74748d6?, 0xc002b28f00?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000fcdc20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 STEP: Removing consuming RC rs 11/06/22 02:46:09.326 Nov 6 02:46:09.327: INFO: RC rs: stopping metric consumer Nov 6 02:46:09.327: INFO: RC rs: stopping CPU consumer Nov 6 02:46:09.327: INFO: RC rs: stopping mem consumer STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-1322, will wait for the garbage collector to delete the pods 11/06/22 02:46:19.329 Nov 6 02:46:19.442: INFO: Deleting ReplicaSet.apps rs took: 32.461946ms Nov 6 02:46:19.542: INFO: Terminating ReplicaSet.apps rs pods took: 100.751883ms STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-1322, will wait for the garbage collector to delete the pods 11/06/22 02:46:22.012 Nov 6 02:46:22.125: INFO: Deleting ReplicationController rs-ctrl took: 34.251286ms Nov 6 02:46:22.226: INFO: Terminating ReplicationController rs-ctrl pods took: 101.13934ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 02:46:23.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/06/22 02:46:24.024 STEP: Collecting events from namespace "horizontal-pod-autoscaling-1322". 11/06/22 02:46:24.025 STEP: Found 19 events. 
11/06/22 02:46:24.054 Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:43 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-nbpdb Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:43 +0000 UTC - event for rs-nbpdb: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-nbpdb to capz-conf-6qqvv Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:46 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Created: Created container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:46 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:48 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Started: Started container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:54 +0000 UTC - event for rs-ctrl: {replication-controller } SuccessfulCreate: Created pod: rs-ctrl-pb7lg Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:54 +0000 UTC - event for rs-ctrl-pb7lg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-ctrl-pb7lg to capz-conf-ppc2q Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:56 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:56 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Created: Created container rs-ctrl Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:57 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Started: Started container rs-ctrl Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-77gvd Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs-77gvd: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-77gvd to capz-conf-ppc2q Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:26 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Created: Created container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:26 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:28 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Started: Started container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:19 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Killing: Stopping container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:19 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Killing: Stopping container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:22 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Killing: Stopping container rs-ctrl Nov 6 02:46:24.082: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 02:46:24.082: INFO: Nov 6 02:46:24.116: INFO: Logging node info for node capz-conf-6qqvv Nov 6 02:46:24.145: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 13106 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:45:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki 
BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess 
docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.145: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 02:46:24.173: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 02:46:24.217: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 02:46:24.217: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.217: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:46:24.217: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:46:24.217: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:46:24.217: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:46:24.217: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.380: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 02:46:24.380: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.411: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 13265 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 02:46:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 
registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.411: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.442: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.499: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container etcd ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:46:24.499: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 02:46:24.499: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.499: INFO: Container calico-node ready: true, restart count 0 Nov 6 02:46:24.499: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 02:46:24.499: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container metrics-server ready: true, restart count 0 Nov 6 02:46:24.499: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container coredns ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-scheduler ready: true, 
restart count 0 Nov 6 02:46:24.499: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.499: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container coredns ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 02:46:24.643: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.643: INFO: Logging node info for node capz-conf-ppc2q Nov 6 02:46:24.672: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 13108 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:45:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 
registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.672: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 02:46:24.700: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 02:46:24.745: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.745: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 02:46:24.745: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.745: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:46:24.745: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:46:24.745: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:46:24.745: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:46:24.884: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 
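The namespace dump above records events and node info but not the autoscaler object itself, which is usually the quickest way to see why the scale-up stalled at 2 replicas (for example, whether metrics were unavailable for some pods) before the namespace is destroyed below. A minimal client-go sketch for reading that status follows; the HPA name "rs" is an assumption (matching the ReplicaSet it targets), while the namespace and kubeconfig path come from the log.

```go
// Hedged sketch (not part of the captured run): dump the status and conditions
// of the autoscaler driving the scale-up above. The HPA name "rs" is an
// assumption; the namespace and kubeconfig path come from the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hpa, err := client.AutoscalingV2().
		HorizontalPodAutoscalers("horizontal-pod-autoscaling-1322").
		Get(context.TODO(), "rs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Current vs. desired replicas, then the controller's own view of why it
	// is (or is not) scaling.
	fmt.Printf("current=%d desired=%d\n", hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas)
	for _, c := range hpa.Status.Conditions {
		fmt.Printf("%s=%s reason=%s message=%s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```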
STEP: Destroying namespace "horizontal-pod-autoscaling-1322" for this suite. 11/06/22 02:46:24.884 ------------------------------ • [FAILED] [941.255 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 02:30:43.665 Nov 6 02:30:43.666: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:30:43.667 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:30:43.759 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:30:43.814 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70 Nov 6 02:30:43.869: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas 11/06/22 02:30:43.87 STEP: Creating replicaset rs in namespace horizontal-pod-autoscaling-1322 11/06/22 02:30:43.919 STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-1322 11/06/22 02:30:43.919 I1106 02:30:43.951654 14 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-1322, replica count: 1 I1106 02:30:54.004872 14 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/06/22 02:30:54.005 STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-1322 11/06/22 02:30:54.052 I1106 02:30:54.086831 14 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-1322, replica count: 1 I1106 02:31:04.141575 14 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 02:31:09.142: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 6 02:31:09.170: INFO: RC rs: consume 250 millicores in total Nov 6 02:31:09.170: INFO: RC rs: setting consumption to 250 millicores in total Nov 6 02:31:09.171: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:31:09.171: INFO: RC rs: consume 0 MB in total Nov 6 02:31:09.171: INFO: RC rs: disabling mem consumption Nov 6 02:31:09.171: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:31:09.171: INFO: RC rs: consume custom metric 0 in total Nov 6 02:31:09.171: INFO: RC rs: 
disabling consumption of custom metric QPS Nov 6 02:31:09.233: INFO: waiting for 3 replicas (current: 1) Nov 6 02:31:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:31:39.236: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:31:39.236: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:31:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:12.283: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:32:12.283: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:32:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:32:42.319: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:32:42.319: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:32:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:12.358: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:33:12.358: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:33:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:33:42.397: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:33:42.397: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:33:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:12.432: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:34:12.432: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:34:29.266: INFO: waiting for 3 replicas (current: 2) Nov 6 02:34:42.472: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:34:42.472: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:34:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:12.508: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:35:12.508: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 
02:35:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:35:42.542: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:35:42.542: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:35:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:12.579: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:36:12.579: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:36:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:36:42.616: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:36:42.617: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:36:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:12.652: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:37:12.652: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:37:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:37:42.688: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:37:42.688: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:37:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:12.723: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:38:12.723: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:38:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:38:42.761: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:38:42.762: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:38:49.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:12.807: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:39:12.807: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:39:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:39:42.843: INFO: RC rs: sending request to consume 250 
millicores Nov 6 02:39:42.843: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:39:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:09.261: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:12.879: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:40:12.879: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:40:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:40:42.918: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:40:42.918: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:40:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:12.954: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:41:12.954: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:41:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:41:42.991: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:41:42.991: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:41:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:09.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:13.029: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:42:13.029: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:42:29.264: INFO: waiting for 3 replicas (current: 2) Nov 6 02:42:43.065: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:42:43.066: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:42:49.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:13.106: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:43:13.106: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:43:29.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:43:43.144: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:43:43.144: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:43:49.267: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:09.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:13.181: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:44:13.181: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:44:29.265: INFO: waiting for 3 replicas (current: 2) Nov 6 02:44:43.216: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:44:43.217: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:44:49.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:09.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:13.257: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:45:13.257: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:45:29.263: INFO: waiting for 3 replicas (current: 2) Nov 6 02:45:43.293: INFO: RC rs: sending request to consume 250 millicores Nov 6 02:45:43.294: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1322/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 02:45:49.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:46:09.262: INFO: waiting for 3 replicas (current: 2) Nov 6 02:46:09.291: INFO: waiting for 3 replicas (current: 2) Nov 6 02:46:09.291: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 02:46:09.291: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002dffe68, {0x74748d6?, 0xc002b28f00?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000fcdc20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 STEP: Removing consuming RC rs 11/06/22 02:46:09.326 Nov 6 02:46:09.327: INFO: RC rs: stopping metric consumer Nov 6 02:46:09.327: INFO: RC rs: stopping CPU consumer Nov 6 02:46:09.327: INFO: RC rs: stopping mem consumer STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-1322, will wait for the garbage collector to delete the pods 11/06/22 02:46:19.329 Nov 6 02:46:19.442: INFO: Deleting ReplicaSet.apps rs took: 32.461946ms Nov 6 02:46:19.542: INFO: Terminating ReplicaSet.apps rs pods took: 100.751883ms STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-1322, will wait for the garbage collector to delete the pods 11/06/22 02:46:22.012 Nov 6 02:46:22.125: INFO: Deleting ReplicationController rs-ctrl took: 34.251286ms Nov 6 02:46:22.226: INFO: Terminating ReplicationController rs-ctrl pods took: 101.13934ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 02:46:23.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/06/22 02:46:24.024 STEP: Collecting events from namespace "horizontal-pod-autoscaling-1322". 11/06/22 02:46:24.025 STEP: Found 19 events. 
11/06/22 02:46:24.054 Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:43 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-nbpdb Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:43 +0000 UTC - event for rs-nbpdb: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-nbpdb to capz-conf-6qqvv Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:46 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Created: Created container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:46 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:48 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Started: Started container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:54 +0000 UTC - event for rs-ctrl: {replication-controller } SuccessfulCreate: Created pod: rs-ctrl-pb7lg Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:54 +0000 UTC - event for rs-ctrl-pb7lg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-ctrl-pb7lg to capz-conf-ppc2q Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:56 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:56 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Created: Created container rs-ctrl Nov 6 02:46:24.054: INFO: At 2022-11-06 02:30:57 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Started: Started container rs-ctrl Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-77gvd Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:24 +0000 UTC - event for rs-77gvd: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1322/rs-77gvd to capz-conf-ppc2q Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:26 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Created: Created container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:26 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 02:46:24.054: INFO: At 2022-11-06 02:31:28 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Started: Started container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:19 +0000 UTC - event for rs-77gvd: {kubelet capz-conf-ppc2q} Killing: Stopping container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:19 +0000 UTC - event for rs-nbpdb: {kubelet capz-conf-6qqvv} Killing: Stopping container rs Nov 6 02:46:24.054: INFO: At 2022-11-06 02:46:22 +0000 UTC - event for rs-ctrl-pb7lg: {kubelet capz-conf-ppc2q} Killing: Stopping container rs-ctrl Nov 6 02:46:24.082: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 02:46:24.082: INFO: Nov 6 02:46:24.116: INFO: Logging node info for node capz-conf-6qqvv Nov 6 02:46:24.145: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 13106 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:45:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki 
BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess 
docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.145: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 02:46:24.173: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 02:46:24.217: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 02:46:24.217: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.217: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:46:24.217: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:46:24.217: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:46:24.217: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:46:24.217: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.217: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.380: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 02:46:24.380: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.411: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 13265 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 02:46:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:46:23 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 
registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.411: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.442: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.499: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container etcd ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:46:24.499: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 02:46:24.499: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.499: INFO: Container calico-node ready: true, restart count 0 Nov 6 02:46:24.499: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 02:46:24.499: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container metrics-server ready: true, restart count 0 Nov 6 02:46:24.499: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container coredns ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-scheduler ready: true, 
restart count 0 Nov 6 02:46:24.499: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.499: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container coredns ready: true, restart count 0 Nov 6 02:46:24.499: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.499: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 02:46:24.643: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 02:46:24.643: INFO: Logging node info for node capz-conf-ppc2q Nov 6 02:46:24.672: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 13108 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 02:45:00 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 02:45:00 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 
registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:46:24.672: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 02:46:24.700: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 02:46:24.745: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 02:46:24.745: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 02:46:24.745: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:46:24.745: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 02:46:24.745: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 02:46:24.745: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 02:46:24.745: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 02:46:24.745: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 02:46:24.884: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 
STEP: Destroying namespace "horizontal-pod-autoscaling-1322" for this suite. 11/06/22 02:46:24.884
<< End Captured GinkgoWriter Output
Nov 6 02:46:09.291: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition
In [It] at: test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002dffe68, {0x74748d6?, 0xc002b28f00?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000fcdc20)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1()
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
test/e2e/apps/daemon_set.go:194
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:46:24.924
Nov 6 02:46:24.924: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets 11/06/22 02:46:24.926
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:46:25.018
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:46:25.073
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:146
[It] should run and stop complex daemon [Conformance]
  test/e2e/apps/daemon_set.go:194
Nov 6 02:46:25.252: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes. 11/06/22 02:46:25.284
Nov 6 02:46:25.313: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:25.313: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Change node label to blue, check that daemon pod is launched. 11/06/22 02:46:25.313
Nov 6 02:46:25.444: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:25.444: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:26.474: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:26.474: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:27.475: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:27.475: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:28.474: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:28.474: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:29.474: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:29.474: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:30.473: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 02:46:30.473: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Update the node label to green, and wait for daemons to be unscheduled 11/06/22 02:46:30.501
Nov 6 02:46:30.605: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:30.605: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 11/06/22 02:46:30.605
Nov 6 02:46:30.674: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:30.674: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:31.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:31.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:32.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:32.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:33.704: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:33.704: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:34.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:34.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:35.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:35.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:36.704: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:36.704: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:37.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:37.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:38.705: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:38.705: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:39.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:39.703: INFO: Node capz-conf-ppc2q is running 0 daemon pod, expected 1
Nov 6 02:46:40.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 6 02:46:40.703: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:111
STEP: Deleting DaemonSet "daemon-set" 11/06/22 02:46:40.76
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3492, will wait for the garbage collector to delete the pods 11/06/22 02:46:40.76
Nov 6 02:46:40.872: INFO: Deleting DaemonSet.extensions daemon-set took: 32.770006ms
Nov 6 02:46:40.972: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.452678ms
Nov 6 02:46:46.201: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 6 02:46:46.201: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Nov 6 02:46:46.230: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"13396"},"items":null}
Nov 6 02:46:46.262: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13397"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Nov 6 02:46:46.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-3492" for this suite. 11/06/22 02:46:46.417
------------------------------
• [21.525 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  test/e2e/apps/daemon_set.go:194
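What the passing "should run and stop complex daemon" spec above exercises is the basic DaemonSet nodeSelector contract: a daemon pod is only scheduled onto nodes whose labels match the pod template's selector, so relabelling a node from blue to green is enough to launch or drain the daemon pod, exactly the sequence logged above. The client-go sketch below is illustrative only and is not the e2e suite's helper code; the "color" label value, the default namespace, the agnhost image, and the hard-coded node name capz-conf-ppc2q are assumptions made for the example.

// daemonset_selector_sketch.go — minimal, assumed illustration of the behaviour tested above.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A DaemonSet whose pods only schedule onto nodes labelled color=blue.
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "daemon-set"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "daemon-set"}},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "registry.k8s.io/e2e-test-images/agnhost:2.40",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Labelling a node color=blue makes the DaemonSet controller place a pod there;
	// patching the same node to color=green later drains that pod again.
	patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "capz-conf-ppc2q", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	d, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("desired=%d ready=%d\n", d.Status.DesiredNumberScheduled, d.Status.NumberReady)
}

Re-applying the patch with {"color":"green"} reproduces the "wait for daemons to be unscheduled" phase of the spec, and updating the DaemonSet's nodeSelector to green brings the pod back, matching the RollingUpdate step in the log.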
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource)
Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
test/e2e/autoscaling/horizontal_pod_autoscaling.go:157
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:46:46.452
Nov 6 02:46:46.452: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:46:46.453
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:46:46.549
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:46:46.604
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:157
Nov 6 02:46:46.659: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 02:46:46.66
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-1019 11/06/22 02:46:46.701
I1106 02:46:46.736082 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-1019, replica count: 1
I1106 02:46:56.787278 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 02:46:56.787
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-1019 11/06/22 02:46:56.826
I1106 02:46:56.862714 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-1019, replica count: 1
I1106 02:47:06.914333 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 02:47:11.915: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Nov 6 02:47:11.943: INFO: RC test-deployment: consume 0 millicores in total
Nov 6 02:47:11.943: INFO: RC test-deployment: disabling CPU consumption
Nov 6 02:47:11.943: INFO: RC test-deployment: consume 250 MB in total
Nov 6 02:47:11.943: INFO: RC test-deployment: setting consumption to 250 MB in total
Nov 6 02:47:11.943: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:47:11.943: INFO: RC test-deployment: consume custom metric 0 in total
Nov 6 02:47:11.943: INFO: RC test-deployment: disabling consumption of custom metric QPS
Nov 6 02:47:11.943: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 02:47:12.005: INFO: waiting for 3 replicas (current: 1)
Nov 6 02:47:32.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:47:41.997: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:47:41.997: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 02:47:52.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:48:12.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:48:15.032: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:48:15.033: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 02:48:32.033: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:48:45.071: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:48:45.073: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 02:48:52.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:49:12.035: INFO: waiting for 3 replicas (current: 2)
Nov 6 02:49:15.110: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:49:15.110: INFO: ConsumeMem URL: {https
capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:49:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:49:45.146: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:49:45.147: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:49:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:12.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:15.182: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:50:15.183: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:50:32.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:45.220: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:50:45.220: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:50:52.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:12.038: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:15.257: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:51:15.257: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:51:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:45.293: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:51:45.293: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:51:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:15.332: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:52:15.332: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:52:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:45.370: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:52:45.370: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:52:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:12.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:15.408: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:53:15.408: INFO: ConsumeMem URL: 
{https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:53:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:45.445: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:53:45.445: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:53:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:15.481: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:54:15.481: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:54:32.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:45.518: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:54:45.518: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:54:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:15.555: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:55:15.555: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:55:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:45.594: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:55:45.594: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:55:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:15.631: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:56:15.631: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:56:32.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:45.669: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:56:45.669: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:56:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:15.710: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:57:15.710: INFO: ConsumeMem 
URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:57:32.037: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:45.754: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:57:45.755: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:57:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:15.791: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:58:15.791: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:58:32.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:45.832: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:58:45.832: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:58:52.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:15.872: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:59:15.872: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:59:32.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:45.908: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:59:45.908: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:59:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:15.944: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:00:15.944: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:00:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:45.984: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:00:45.984: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:00:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:01:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 03:01:16.022: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:01:16.023: INFO: 
ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 03:01:32.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:01:46.059: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 03:01:46.059: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
Nov 6 03:01:52.034: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:02:12.057: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:02:12.085: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:02:12.085: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas:
    <*errors.errorString | 0xc000205cd0>: {
        s: "timed out waiting for the condition",
    }
Nov 6 03:02:12.086: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000749e68, {0x74a0e0e?, 0xc003d3eb40?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.2()
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:158 +0x88
STEP: Removing consuming RC test-deployment 11/06/22 03:02:12.124
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping metric consumer
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping mem consumer
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping CPU consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-1019, will wait for the garbage collector to delete the pods 11/06/22 03:02:22.125
Nov 6 03:02:22.241: INFO: Deleting Deployment.apps test-deployment took: 35.785658ms
Nov 6 03:02:22.342: INFO: Terminating Deployment.apps test-deployment pods took: 101.309969ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-1019, will wait for the garbage collector to delete the pods 11/06/22 03:02:24.531
Nov 6 03:02:24.643: INFO: Deleting ReplicationController test-deployment-ctrl took: 32.546442ms
Nov 6 03:02:24.743: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.630374ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/node/init/init.go:32
Nov 6 03:02:26.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/06/22 03:02:26.529
STEP: Collecting events from namespace "horizontal-pod-autoscaling-1019". 11/06/22 03:02:26.529
STEP: Found 21 events.
�[38;5;243m11/06/22 03:02:26.558�[0m Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-xxjkg Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-669bb6996d-xxjkg to capz-conf-ppc2q Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:48 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:49 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:50 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:56 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-qtbr2 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:56 +0000 UTC - event for test-deployment-ctrl-qtbr2: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-ctrl-qtbr2 to capz-conf-6qqvv Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:59 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Created: Created container test-deployment-ctrl Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:59 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:01 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Started: Started container test-deployment-ctrl Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:26 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: memory resource above target Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:26 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:27 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-p5725 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:27 +0000 UTC - event for test-deployment-669bb6996d-p5725: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-669bb6996d-p5725 to capz-conf-6qqvv Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:29 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:30 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:31 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet 
capz-conf-6qqvv} Started: Started container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:22 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:22 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:24 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment-ctrl Nov 6 03:02:26.602: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 03:02:26.602: INFO: Nov 6 03:02:26.633: INFO: Logging node info for node capz-conf-6qqvv Nov 6 03:02:26.661: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 14680 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe 
Update v1 2022-11-06 03:00:19 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:26.662: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 03:02:26.690: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 03:02:26.736: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 03:02:26.736: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:26.736: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:02:26.737: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:02:26.737: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:02:26.737: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:02:26.737: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:26.893: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 03:02:26.894: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.923: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 14799 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 03:01:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:26.924: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.952: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.999: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container etcd ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 03:02:26.999: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 03:02:26.999: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:26.999: INFO: Container calico-node ready: true, restart count 0 Nov 6 03:02:26.999: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 03:02:26.999: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container metrics-server ready: true, restart count 0 Nov 6 03:02:26.999: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container coredns ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:26.999: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container coredns ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 03:02:27.146: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:27.146: INFO: Logging node info for node capz-conf-ppc2q Nov 6 03:02:27.175: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 14683 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 03:00:20 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 
DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess 
docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:27.176: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 03:02:27.208: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 03:02:27.251: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:27.251: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 03:02:27.251: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:27.251: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:02:27.251: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:02:27.251: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:02:27.251: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:02:27.393: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-1019" for this suite. 
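For reference, the loop the log keeps repeating ("ConsumeMem URL: ..." followed by "waiting for 3 replicas") can be reproduced outside the suite with a minimal, hypothetical client-go sketch. This is not the e2e framework's own ResourceConsumer/HPA helper code; it only mirrors the behavior visible above, reusing the namespace, service, deployment names, query parameters, and /tmp/kubeconfig path taken from the log, and the same 15m0s timeout that the failure reports.

// Hypothetical sketch: drive the resource consumer's ConsumeMem endpoint via the
// API-server service proxy and poll the Deployment until 3 replicas are ready.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Names and paths taken from the log output above.
	const (
		ns         = "horizontal-pod-autoscaling-1019"
		deployment = "test-deployment"
		ctrlSvc    = "test-deployment-ctrl"
	)

	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// POST to /api/v1/namespaces/<ns>/services/<svc>/proxy/ConsumeMem with the
	// same durationSec=30&megabytes=250&requestSizeMegabytes=100 query seen above.
	consume := func() error {
		return cs.CoreV1().RESTClient().Post().
			Namespace(ns).
			Resource("services").
			Name(ctrlSvc).
			SubResource("proxy").
			Suffix("ConsumeMem").
			Param("durationSec", "30").
			Param("megabytes", "250").
			Param("requestSizeMegabytes", "100").
			Do(ctx).Error()
	}

	// Poll roughly every 20s for up to 15m, the timeout the test reports.
	err = wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
		if err := consume(); err != nil {
			fmt.Println("ConsumeMem request failed:", err)
		}
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, deployment, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for 3 replicas (current: %d)\n", d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas >= 3, nil
	})
	if err != nil {
		panic(fmt.Errorf("timeout waiting 15m0s for 3 replicas: %w", err))
	}
}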
11/06/22 03:02:27.394
------------------------------
• [FAILED] [940.985 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment (Pod Resource)
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:153
    [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:157

  Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 02:46:46.452
Nov 6 02:46:46.452: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 02:46:46.453
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 02:46:46.549
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 02:46:46.604
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
  test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:157
Nov 6 02:46:46.659: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 02:46:46.66
STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-1019 11/06/22 02:46:46.701
I1106 02:46:46.736082 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-1019, replica count: 1
I1106 02:46:56.787278 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 02:46:56.787
STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-1019 11/06/22 02:46:56.826
I1106 02:46:56.862714 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-1019, replica count: 1
I1106 02:47:06.914333 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 02:47:11.915: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1
Nov 6 02:47:11.943: INFO: RC test-deployment: consume 0 millicores in total
Nov 6 02:47:11.943: INFO: RC test-deployment: disabling CPU consumption
Nov 6 02:47:11.943: INFO: RC test-deployment: consume 250 MB in total
Nov 6 02:47:11.943: INFO: RC test-deployment: setting consumption to 250 MB in total
Nov 6 02:47:11.943: INFO: RC test-deployment: sending request to consume 250 MB
Nov 6 02:47:11.943: INFO: RC test-deployment: consume custom metric 0 in total
Nov 6 02:47:11.943: INFO: RC test-deployment: disabling consumption of custom metric QPS
Nov 6 02:47:11.943: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443
/api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:47:12.005: INFO: waiting for 3 replicas (current: 1) Nov 6 02:47:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:47:41.997: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:47:41.997: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:47:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:48:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:48:15.032: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:48:15.033: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:48:32.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:48:45.071: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:48:45.073: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:48:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:49:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:49:15.110: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:49:15.110: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:49:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:49:45.146: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:49:45.147: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:49:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:12.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:15.182: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:50:15.183: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:50:32.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:50:45.220: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:50:45.220: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:50:52.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:12.038: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:15.257: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:51:15.257: INFO: ConsumeMem 
URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:51:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:51:45.293: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:51:45.293: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:51:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:15.332: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:52:15.332: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:52:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:52:45.370: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:52:45.370: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:52:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:12.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:15.408: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:53:15.408: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:53:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:53:45.445: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:53:45.445: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:53:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:15.481: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:54:15.481: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:54:32.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:54:45.518: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:54:45.518: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:54:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:15.555: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:55:15.555: INFO: 
ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:55:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:55:45.594: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:55:45.594: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:55:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:15.631: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:56:15.631: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:56:32.036: INFO: waiting for 3 replicas (current: 2) Nov 6 02:56:45.669: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:56:45.669: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:56:52.033: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:15.710: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:57:15.710: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:57:32.037: INFO: waiting for 3 replicas (current: 2) Nov 6 02:57:45.754: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:57:45.755: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:57:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:15.791: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:58:15.791: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:58:32.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:58:45.832: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:58:45.832: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:58:52.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:15.872: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:59:15.872: 
INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:59:32.035: INFO: waiting for 3 replicas (current: 2) Nov 6 02:59:45.908: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 02:59:45.908: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 02:59:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:12.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:15.944: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:00:15.944: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:00:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:00:45.984: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:00:45.984: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:00:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:01:12.035: INFO: waiting for 3 replicas (current: 2) Nov 6 03:01:16.022: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:01:16.023: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:01:32.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:01:46.059: INFO: RC test-deployment: sending request to consume 250 MB Nov 6 03:01:46.059: INFO: ConsumeMem URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1019/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 6 03:01:52.034: INFO: waiting for 3 replicas (current: 2) Nov 6 03:02:12.057: INFO: waiting for 3 replicas (current: 2) Nov 6 03:02:12.085: INFO: waiting for 3 replicas (current: 2) Nov 6 03:02:12.085: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 03:02:12.086: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000749e68, {0x74a0e0e?, 0xc003d3eb40?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.2()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:158 +0x88
STEP: Removing consuming RC test-deployment 11/06/22 03:02:12.124
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping metric consumer
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping mem consumer
Nov 6 03:02:12.125: INFO: RC test-deployment: stopping CPU consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-1019, will wait for the garbage collector to delete the pods 11/06/22 03:02:22.125
Nov 6 03:02:22.241: INFO: Deleting Deployment.apps test-deployment took: 35.785658ms
Nov 6 03:02:22.342: INFO: Terminating Deployment.apps test-deployment pods took: 101.309969ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-1019, will wait for the garbage collector to delete the pods 11/06/22 03:02:24.531
Nov 6 03:02:24.643: INFO: Deleting ReplicationController test-deployment-ctrl took: 32.546442ms
Nov 6 03:02:24.743: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.630374ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
test/e2e/framework/node/init/init.go:32
Nov 6 03:02:26.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory)
dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/06/22 03:02:26.529
STEP: Collecting events from namespace "horizontal-pod-autoscaling-1019". 11/06/22 03:02:26.529
STEP: Found 21 events.
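Before the namespace dump that follows, it is worth unpacking what the failed spec above was doing. The repeated "ConsumeMem URL: {https ... /services/test-deployment-ctrl/proxy/ConsumeMem ... durationSec=30&megabytes=250&requestSizeMegabytes=100 }" entries are a printed Go url.URL: the test keeps asking the resource-consumer controller service, through the apiserver's service proxy, to hold 250 MB, and in parallel polls the test-deployment Deployment until the HPA has scaled it to 3 ready replicas, giving up after the 15m0s budget reported in the FAIL line. The sketch below illustrates that consume-and-poll pattern with client-go; it is not the actual helper code in test/e2e/autoscaling or test/e2e/framework, and the package and function names are invented for illustration.

```go
// Illustrative only: a minimal sketch of the consume-and-wait pattern visible in
// the log above, written against client-go. The names (hpasketch, consumeMem,
// waitForReadyReplicas) are invented here; this is not the e2e framework's code.
package hpasketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// consumeMem asks the resource-consumer controller service to hold roughly
// `megabytes` of memory for durationSec seconds, going through the apiserver's
// service proxy subresource (which is what the "ConsumeMem URL" lines show).
func consumeMem(ctx context.Context, cs kubernetes.Interface, ns, svc string, megabytes int) error {
	return cs.CoreV1().RESTClient().Post().
		Namespace(ns).
		Resource("services").
		Name(svc).
		SubResource("proxy").
		Suffix("ConsumeMem").
		Param("durationSec", "30").
		Param("megabytes", fmt.Sprint(megabytes)).
		Param("requestSizeMegabytes", "100").
		Do(ctx).Error()
}

// waitForReadyReplicas polls the Deployment until it reports the desired number
// of ready replicas, or gives up after the timeout, like the FAIL line above.
func waitForReadyReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas == want, nil
	})
}
```

In this run the Deployment never went from 2 to 3 replicas within the window, so a poll like the one above would return a wait timeout, which is what surfaces as "timed out waiting for the condition".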
�[38;5;243m11/06/22 03:02:26.558�[0m Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-xxjkg Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:46 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-669bb6996d-xxjkg to capz-conf-ppc2q Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:48 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:49 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:50 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:56 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-qtbr2 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:56 +0000 UTC - event for test-deployment-ctrl-qtbr2: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-ctrl-qtbr2 to capz-conf-6qqvv Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:59 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Created: Created container test-deployment-ctrl Nov 6 03:02:26.559: INFO: At 2022-11-06 02:46:59 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:01 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Started: Started container test-deployment-ctrl Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:26 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: memory resource above target Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:26 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:27 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-p5725 Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:27 +0000 UTC - event for test-deployment-669bb6996d-p5725: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-1019/test-deployment-669bb6996d-p5725 to capz-conf-6qqvv Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:29 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:30 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 02:47:31 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet 
capz-conf-6qqvv} Started: Started container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:22 +0000 UTC - event for test-deployment-669bb6996d-p5725: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:22 +0000 UTC - event for test-deployment-669bb6996d-xxjkg: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 03:02:26.559: INFO: At 2022-11-06 03:02:24 +0000 UTC - event for test-deployment-ctrl-qtbr2: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment-ctrl Nov 6 03:02:26.602: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 03:02:26.602: INFO: Nov 6 03:02:26.633: INFO: Logging node info for node capz-conf-6qqvv Nov 6 03:02:26.661: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 14680 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe 
Update v1 2022-11-06 03:00:19 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:00:19 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:26.662: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 03:02:26.690: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 03:02:26.736: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 03:02:26.736: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:26.736: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:02:26.737: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:02:26.737: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:02:26.737: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:02:26.737: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.737: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:26.893: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 03:02:26.894: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.923: INFO: Node Info: &Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 14799 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 03:01:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:01:42 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:26.924: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.952: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:26.999: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container etcd ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 03:02:26.999: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 03:02:26.999: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:26.999: INFO: Container calico-node ready: true, restart count 0 Nov 6 03:02:26.999: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 03:02:26.999: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container metrics-server ready: true, restart count 0 Nov 6 03:02:26.999: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container coredns ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:26.999: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container coredns ready: true, restart count 0 Nov 6 03:02:26.999: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:26.999: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 03:02:27.146: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:02:27.146: INFO: Logging node info for node capz-conf-ppc2q Nov 6 03:02:27.175: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 14683 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 03:00:20 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 
DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:00:20 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess 
docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:02:27.176: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 03:02:27.208: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 03:02:27.251: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:02:27.251: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 03:02:27.251: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:02:27.251: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:02:27.251: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:02:27.251: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:02:27.251: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:02:27.251: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:02:27.393: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-1019" for this suite. �[38;5;243m11/06/22 03:02:27.394�[0m �[38;5;243m<< End Captured GinkgoWriter Output�[0m �[38;5;9mNov 6 03:02:12.086: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition�[0m �[38;5;9mIn �[1m[It]�[0m�[38;5;9m at: �[1mtest/e2e/autoscaling/horizontal_pod_autoscaling.go:209�[0m �[38;5;9mFull Stack Trace�[0m k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000749e68, {0x74a0e0e?, 0xc003d3eb40?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdd10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.2()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:158 +0x88
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers
test/e2e/windows/gmsa_kubelet.go:47
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:02:27.44
Nov 6 03:02:27.440: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-kubelet-test-windows 11/06/22 03:02:27.442
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:02:27.532
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:02:27.586
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/metrics/init/init.go:31
[It] passes the credential specs down to the Pod's containers
test/e2e/windows/gmsa_kubelet.go:47
STEP: creating a pod with correct GMSA specs 11/06/22 03:02:27.641
Nov 6 03:02:27.678: INFO: Waiting up to 5m0s for pod "with-correct-gmsa-specs" in namespace "gmsa-kubelet-test-windows-1977" to be "running and ready"
Nov 6 03:02:27.709: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.733635ms
Nov 6 03:02:27.709: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:29.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060806963s
Nov 6 03:02:29.739: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:31.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06040472s
Nov 6 03:02:31.738: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:33.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060412837s
Nov 6 03:02:33.738: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:35.740: INFO: Pod "with-correct-gmsa-specs": Phase="Running", Reason="", readiness=true.
Elapsed: 8.061983317s
Nov 6 03:02:35.740: INFO: The phase of Pod with-correct-gmsa-specs is Running (Ready = true)
Nov 6 03:02:35.740: INFO: Pod "with-correct-gmsa-specs" satisfied condition "running and ready"
STEP: checking the domain reported by nltest in the containers 11/06/22 03:02:35.768
Nov 6 03:02:35.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-1977 exec --namespace=gmsa-kubelet-test-windows-1977 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Nov 6 03:02:36.551: INFO: stderr: ""
Nov 6 03:02:36.551: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Nov 6 03:02:36.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-1977 exec --namespace=gmsa-kubelet-test-windows-1977 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Nov 6 03:02:37.094: INFO: stderr: ""
Nov 6 03:02:37.094: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/node/init/init.go:32
Nov 6 03:02:37.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
tear down framework | framework.go:193
STEP: Destroying namespace "gmsa-kubelet-test-windows-1977" for this suite. 11/06/22 03:02:37.126
------------------------------
• [9.721 seconds]
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow] test/e2e/windows/framework.go:27
kubelet GMSA support test/e2e/windows/gmsa_kubelet.go:45
when creating a pod with correct GMSA credential specs test/e2e/windows/gmsa_kubelet.go:46
passes the credential specs down to the Pod's containers test/e2e/windows/gmsa_kubelet.go:47
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:02:27.44
Nov 6 03:02:27.440: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-kubelet-test-windows 11/06/22 03:02:27.442
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:02:27.532
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:02:27.586
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/metrics/init/init.go:31
[It] passes the credential specs down to the Pod's containers
test/e2e/windows/gmsa_kubelet.go:47
STEP: creating a pod with correct GMSA specs 11/06/22 03:02:27.641
Nov 6 03:02:27.678: INFO: Waiting up to 5m0s for pod "with-correct-gmsa-specs" in namespace "gmsa-kubelet-test-windows-1977" to be "running and ready"
Nov 6 03:02:27.709: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false.
Elapsed: 31.733635ms
Nov 6 03:02:27.709: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:29.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060806963s
Nov 6 03:02:29.739: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:31.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06040472s
Nov 6 03:02:31.738: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:33.738: INFO: Pod "with-correct-gmsa-specs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060412837s
Nov 6 03:02:33.738: INFO: The phase of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Nov 6 03:02:35.740: INFO: Pod "with-correct-gmsa-specs": Phase="Running", Reason="", readiness=true. Elapsed: 8.061983317s
Nov 6 03:02:35.740: INFO: The phase of Pod with-correct-gmsa-specs is Running (Ready = true)
Nov 6 03:02:35.740: INFO: Pod "with-correct-gmsa-specs" satisfied condition "running and ready"
STEP: checking the domain reported by nltest in the containers 11/06/22 03:02:35.768
Nov 6 03:02:35.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-1977 exec --namespace=gmsa-kubelet-test-windows-1977 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Nov 6 03:02:36.551: INFO: stderr: ""
Nov 6 03:02:36.551: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Nov 6 03:02:36.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-1977 exec --namespace=gmsa-kubelet-test-windows-1977 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Nov 6 03:02:37.094: INFO: stderr: ""
Nov 6 03:02:37.094: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/node/init/init.go:32
Nov 6 03:02:37.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
tear down framework | framework.go:193
STEP: Destroying namespace "gmsa-kubelet-test-windows-1977" for this suite. 11/06/22 03:02:37.126
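The passing GMSA spec above boils down to running nltest /PARENTDOMAIN inside each Windows container and comparing the reported domain with the one named in that container's GMSA credential spec (acme.com for container1, contoso.org for container2). A rough, illustrative reproduction of that check, shelling out to kubectl exactly as the logged command lines do rather than using the e2e framework's own exec helpers, might look like this (package and function names are invented):

```go
// Illustrative only: reproduces the nltest /PARENTDOMAIN check from the log by
// shelling out to kubectl; the real test goes through the e2e framework's helpers.
package gmsasketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkParentDomain runs `nltest /PARENTDOMAIN` in one container of the pod and
// verifies the reported parent domain matches the GMSA credential spec's domain.
func checkParentDomain(kubeconfig, namespace, pod, container, wantDomain string) error {
	cmd := exec.Command("kubectl",
		"--kubeconfig="+kubeconfig,
		"--namespace="+namespace,
		"exec", pod,
		"--container="+container,
		"--", "nltest", "/PARENTDOMAIN")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl exec failed: %v: %s", err, out)
	}
	// nltest prints e.g. "acme.com. (1)\r\nThe command completed successfully\r\n".
	if !strings.Contains(string(out), wantDomain+".") {
		return fmt.Errorf("container %s reported %q, expected parent domain %q", container, out, wantDomain)
	}
	return nil
}
```

With the values from this run, checkParentDomain("/tmp/kubeconfig", "gmsa-kubelet-test-windows-1977", "with-correct-gmsa-specs", "container1", "acme.com") would accept the "acme.com. (1)" output shown above.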
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:173
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:02:37.165
Nov 6 03:02:37.165: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 03:02:37.166
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:02:37.256
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:02:37.31
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/metrics/init/init.go:31
[It] shouldn't scale down
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:173
STEP: setting up resource consumer and HPA 11/06/22 03:02:37.366
Nov 6 03:02:37.366: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 3 replicas 11/06/22 03:02:37.368
STEP: Creating deployment consumer in namespace horizontal-pod-autoscaling-7647 11/06/22 03:02:37.432
I1106 03:02:37.469867 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-7647, replica count: 3
I1106 03:02:47.521297 14 runners.go:193] consumer Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 03:02:47.521
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-7647 11/06/22 03:02:47.573
I1106 03:02:47.609045 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-7647, replica count: 1
I1106 03:02:57.659778 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 03:03:02.661: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Nov 6 03:03:02.690: INFO: RC consumer: consume 330 millicores in total
Nov 6 03:03:02.690: INFO: RC consumer: setting consumption to 330 millicores in total
Nov 6 03:03:02.690: INFO: RC consumer: sending request to consume 330 millicores
Nov 6 03:03:02.690: INFO: RC consumer: consume 0 MB in total
Nov 6 03:03:02.690: INFO: RC consumer: disabling mem consumption
Nov 6
03:03:02.690: INFO: RC consumer: consume custom metric 0 in total Nov 6 03:03:02.690: INFO: RC consumer: disabling consumption of custom metric QPS Nov 6 03:03:02.690: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } �[1mSTEP:�[0m trying to trigger scale down �[38;5;243m11/06/22 03:03:02.722�[0m Nov 6 03:03:02.723: INFO: RC consumer: consume 110 millicores in total Nov 6 03:03:02.750: INFO: RC consumer: setting consumption to 110 millicores in total Nov 6 03:03:02.778: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:02.807: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 6 03:03:12.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:12.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 6 03:03:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:22.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00075e820} Nov 6 03:03:32.750: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:03:32.750: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:03:32.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:32.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00075e970} Nov 6 03:03:42.838: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:42.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0d50} Nov 6 03:03:52.838: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:03:52.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae1000} Nov 6 03:04:02.787: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:04:02.788: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:04:02.852: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:04:02.880: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc00075ec70} Nov 6 03:04:12.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:04:12.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae1440} Nov 6 03:04:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:04:22.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4b600} Nov 6 03:04:32.837: INFO: expecting there 
to be in [3, 3] replicas (are: 3) Nov 6 03:04:32.844: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:04:32.844: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:04:32.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4b9d0} Nov 6 03:04:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:04:42.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4bd20} Nov 6 03:04:52.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:04:52.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f880d0} Nov 6 03:05:02.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f881b0} Nov 6 03:05:02.881: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:05:02.882: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:05:12.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:12.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0480} Nov 6 03:05:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:22.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0710} Nov 6 03:05:32.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:32.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f885e0} Nov 6 03:05:32.919: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:05:32.919: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:05:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:42.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0890} Nov 6 03:05:52.839: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:05:52.867: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0bd0} Nov 6 03:06:02.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88aa0} Nov 6 03:06:02.955: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:06:02.955: INFO: ConsumeCPU URL: {https 
capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:06:12.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:12.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88d50} Nov 6 03:06:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:22.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f89010} Nov 6 03:06:32.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:32.867: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f892b0} Nov 6 03:06:32.992: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:06:32.992: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:06:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:42.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0cb0} Nov 6 03:06:52.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:06:52.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0270} Nov 6 03:07:02.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0500} Nov 6 03:07:03.028: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:07:03.029: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:07:12.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:12.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88180} Nov 6 03:07:22.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:22.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88260} Nov 6 03:07:32.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:32.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0990} Nov 6 03:07:33.066: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:07:33.066: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:07:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:42.864: INFO: HPA status: 
{ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f883c0} Nov 6 03:07:52.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:07:52.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae1010} Nov 6 03:08:02.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae12d0} Nov 6 03:08:03.103: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:08:03.103: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:08:12.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:12.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88720} Nov 6 03:08:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:22.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae17a0} Nov 6 03:08:32.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:32.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae1880} Nov 6 03:08:33.139: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:08:33.139: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:08:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:42.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae1d50} Nov 6 03:08:52.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:08:52.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae0260} Nov 6 03:09:02.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f882f0} Nov 6 03:09:03.181: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:09:03.181: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:09:12.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:12.866: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88760} Nov 6 03:09:22.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:22.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 
CurrentCPUUtilizationPercentage:0xc003ae05c0} Nov 6 03:09:32.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:32.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4a2b0} Nov 6 03:09:33.222: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:09:33.222: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:09:42.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:42.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae06d0} Nov 6 03:09:52.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:09:52.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003ae07d0} Nov 6 03:10:02.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:10:02.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f88f90} Nov 6 03:10:03.258: INFO: RC consumer: sending request to consume 110 millicores Nov 6 03:10:03.258: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7647/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 6 03:10:12.835: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:10:12.863: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4a680} Nov 6 03:10:22.837: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:10:22.865: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4a990} Nov 6 03:10:32.836: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:10:32.864: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f4ac50} Nov 6 03:10:32.893: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 6 03:10:32.921: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:3 CurrentCPUUtilizationPercentage:0xc003f89300} Nov 6 03:10:32.921: INFO: Number of replicas was stable over 7m30s �[1mSTEP:�[0m verifying time waited for a scale down �[38;5;243m11/06/22 03:10:32.921�[0m Nov 6 03:10:32.922: INFO: time waited for scale down: 7m30.171489876s �[1mSTEP:�[0m verifying number of replicas �[38;5;243m11/06/22 03:10:32.922�[0m �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/06/22 03:10:32.982�[0m Nov 6 03:10:32.983: INFO: RC consumer: stopping metric consumer Nov 6 03:10:32.983: INFO: RC consumer: stopping mem consumer Nov 6 03:10:32.983: INFO: RC consumer: stopping CPU consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-7647, will wait for the garbage collector to delete the pods �[38;5;243m11/06/22 03:10:42.985�[0m Nov 6 03:10:43.096: INFO: Deleting Deployment.apps consumer took: 31.932624ms Nov 6 03:10:43.196: INFO: Terminating 
Deployment.apps consumer pods took: 100.722925ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-7647, will wait for the garbage collector to delete the pods 11/06/22 03:10:45.966
Nov 6 03:10:46.077: INFO: Deleting ReplicationController consumer-ctrl took: 32.102464ms
Nov 6 03:10:46.178: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.782592ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/node/init/init.go:32
Nov 6 03:10:47.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-7647" for this suite. 11/06/22 03:10:47.865
------------------------------
• [SLOW TEST] [490.735 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
with autoscaling disabled
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:137
shouldn't scale down
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:173
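The spec above ("with autoscaling disabled / shouldn't scale down") held the consumer Deployment at 3 replicas for the full 7m30s stability window even though consumption was cut from 330 to 110 millicores. A minimal Go sketch of the autoscaling/v2 behavior stanza that produces this "never scale down" behavior follows; the e2e test builds its HPA through the framework's own helpers, so the object name, namespace, and the min/max/target values here are illustrative assumptions, not the test's actual configuration.

// Sketch only: an autoscaling/v2 HPA with scale-down disabled, which is the
// behavior this spec exercises. Values below are illustrative assumptions.
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hpaWithScaleDownDisabled() *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20)
	disabled := autoscalingv2.DisabledPolicySelect // behavior.scaleDown.selectPolicy: Disabled

	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "consumer", Namespace: "horizontal-pod-autoscaling-7647"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "consumer",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
			// With scale-down disabled the controller may add replicas but never
			// removes them, which is why the replica count stays pinned at 3 in
			// the log above after consumption drops from 330 to 110 millicores.
			Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
				ScaleDown: &autoscalingv2.HPAScalingRules{SelectPolicy: &disabled},
			},
		},
	}
}

func main() {
	fmt.Println(*hpaWithScaleDownDisabled().Spec.Behavior.ScaleDown.SelectPolicy) // Disabled
}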
------------------------------
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage
should scale up no more than given percentage of current Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:306
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:10:47.909
Nov 6 03:10:47.909: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 03:10:47.91
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:10:48.016
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:10:48.071
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/metrics/init/init.go:31
[It] should scale up no more than given percentage of current Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:306
STEP: setting up resource consumer and HPA 11/06/22 03:10:48.126
Nov 6 03:10:48.127: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas 11/06/22 03:10:48.128
STEP: Creating deployment consumer in namespace horizontal-pod-autoscaling-4186 11/06/22 03:10:48.17
I1106 03:10:48.204589 14 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-4186, replica count: 2
I1106 03:10:58.257064 14 runners.go:193] consumer Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller 11/06/22 03:10:58.257
STEP: creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-4186 11/06/22 03:10:58.302
I1106 03:10:58.335552 14 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-4186, replica count: 1
I1106 03:11:08.386479 14 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 6 03:11:13.386: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1
Nov 6 03:11:13.415: INFO: RC consumer: consume 90 millicores in total
Nov 6 03:11:13.415: INFO: RC consumer: setting consumption to 90 millicores in total
Nov 6 03:11:13.415: INFO: RC consumer: sending request to consume 90 millicores
Nov 6 03:11:13.415: INFO: RC consumer: consume 0 MB in total
Nov 6 03:11:13.415: INFO: RC consumer: consume custom metric 0 in total
Nov 6 03:11:13.415: INFO: RC consumer: disabling consumption of custom metric QPS
Nov 6 03:11:13.415: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4186/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=90&requestSizeMillicores=100 }
Nov 6 03:11:13.415: INFO: RC consumer: disabling mem consumption
STEP: triggering scale up by increasing consumption 11/06/22 03:11:13.451
Nov 6 03:11:13.451: INFO: RC consumer: consume 360 millicores in total
Nov 6 03:11:13.496: INFO: RC consumer: setting consumption to 360 millicores in total
Nov 6 03:11:13.524: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:11:33.555: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:11:43.497: INFO: RC consumer: sending request to consume 360 millicores
Nov 6 03:11:43.497: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4186/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=360&requestSizeMillicores=100 }
Nov 6 03:11:53.554: INFO: waiting for 3 replicas (current: 2)
Nov 6 03:12:13.554: INFO: waiting for 3 replicas (current: 3)
Nov 6 03:12:13.584: INFO: waiting for 5 replicas (current: 3)
Nov 6 03:12:16.546: INFO: RC consumer: sending request to consume 360 millicores
Nov 6 03:12:16.546: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4186/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=360&requestSizeMillicores=100 }
Nov 6 03:12:33.614: INFO: waiting for 5 replicas (current: 3)
Nov 6 03:12:46.689: INFO: RC consumer: sending request to consume 360 millicores
Nov 6 03:12:46.689: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4186/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=360&requestSizeMillicores=100 }
Nov 6 03:12:53.613: INFO: waiting for 5 replicas (current: 3)
Nov 6 03:13:13.612: INFO: waiting for 5 replicas (current: 5)
STEP: verifying time waited for a scale up to 3 replicas 11/06/22 03:13:13.612
STEP: verifying time waited for a scale up to 5 replicas 11/06/22 03:13:13.612
STEP: Removing consuming RC consumer 11/06/22 03:13:13.645
Nov 6 03:13:13.645: INFO: RC consumer: stopping metric consumer
Nov 6 03:13:13.645: INFO: RC consumer: stopping CPU consumer
Nov 6 03:13:13.645: INFO: RC consumer: stopping mem consumer
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-4186, will wait for the garbage collector to delete the pods 11/06/22 03:13:23.647
Nov 6 03:13:23.761: INFO: Deleting Deployment.apps consumer took: 33.356287ms
Nov 6 03:13:23.862: INFO: Terminating Deployment.apps consumer pods took: 101.119616ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-4186, will wait for the garbage collector to delete the pods 11/06/22 03:13:26.312
Nov 6 03:13:26.422: INFO: Deleting ReplicationController consumer-ctrl took: 31.915579ms
Nov 6 03:13:26.523: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.305672ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/node/init/init.go:32
Nov 6 03:13:28.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4186" for this suite. 11/06/22 03:13:28.21
------------------------------
• [SLOW TEST] [160.334 seconds]
[sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
test/e2e/autoscaling/framework.go:23
with scale limited by percentage
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:301
should scale up no more than given percentage of current Pods per minute
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:306
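In the spec above ("with scale limited by percentage"), replicas grew from 2 to 3 roughly one minute after the load increase and from 3 to 5 one minute after that. That progression is consistent with a scale-up rule that allows at most a fixed percentage of the current pod count per period. The sketch below shows that per-period cap, assuming a Percent policy of 50% per 60s; those numbers are an assumption chosen because they reproduce the 2 -> 3 -> 5 growth seen here, and the test's real policy values are not visible in this log.

// Sketch of the per-period ceiling a percentage-based HPA scale-up policy
// imposes, assuming value: 50 (percent) and periodSeconds: 60.
package main

import (
	"fmt"
	"math"
)

// maxReplicasAfterPeriod returns the most replicas that may exist after one
// period when scale-up is limited to `percent` of the current count.
func maxReplicasAfterPeriod(current int, percent float64) int {
	allowedIncrease := int(math.Ceil(float64(current) * percent / 100.0))
	return current + allowedIncrease
}

func main() {
	replicas := 2
	for minute := 1; minute <= 2; minute++ {
		replicas = maxReplicasAfterPeriod(replicas, 50)
		fmt.Printf("after minute %d: at most %d replicas\n", minute, replicas)
	}
	// Output:
	// after minute 1: at most 3 replicas
	// after minute 2: at most 5 replicas
}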
------------------------------
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory
should be equal to a calculated allocatable memory value
test/e2e/windows/memory_limits.go:54
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:13:28.248
Nov 6 03:13:28.248: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename memory-limit-test-windows 11/06/22 03:13:28.25
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:13:28.337
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:13:28.392
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/windows/memory_limits.go:48
[It] should be equal to a calculated allocatable memory value
test/e2e/windows/memory_limits.go:54
STEP: Getting memory details from node status and kubelet config 11/06/22 03:13:28.476
Nov 6 03:13:28.476: INFO: Getting configuration details for node capz-conf-6qqvv
Nov 6 03:13:28.519: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}}
STEP: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>} BinarySI} 11/06/22 03:13:28.519
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/framework/node/init/init.go:32
Nov 6 03:13:28.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
tear down framework | framework.go:193
STEP: Destroying namespace "memory-limit-test-windows-875" for this suite. 11/06/22 03:13:28.555
------------------------------
• [0.341 seconds]
[sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
test/e2e/windows/framework.go:27
Allocatable node memory
test/e2e/windows/memory_limits.go:53
should be equal to a calculated allocatable memory value
test/e2e/windows/memory_limits.go:54
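The memory-limits spec above compares the node's stated allocatable memory against a value recomputed from the logged nodeMem fields, i.e. allocatable = capacity - systemReserve - kubeReserve - hardEviction. A small sketch reproducing that arithmetic with the exact numbers from the "nodeMem says" line follows; the variable names are illustrative, not taken from the test code.

// Recomputing the allocatable value the spec checks, using the numbers logged
// above for node capz-conf-6qqvv.
package main

import "fmt"

func main() {
	const (
		capacityBytes     = int64(17179398144) // 16776756Ki
		systemReserve     = int64(0)
		kubeReserve       = int64(0)
		hardEvictionBytes = int64(104857600) // 100Mi
	)

	allocatable := capacityBytes - systemReserve - kubeReserve - hardEvictionBytes
	fmt.Printf("calculated allocatable: %d bytes (%dKi)\n", allocatable, allocatable/1024)
	// Prints: calculated allocatable: 17074540544 bytes (16674356Ki),
	// which matches the node's stated allocatable of 16674356Ki.
}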
------------------------------
[sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
test/e2e/common/node/expansion.go:225
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:13:28.592
Nov 6 03:13:28.592: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion 11/06/22 03:13:28.594
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:13:28.68
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:13:28.735
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225
STEP: creating the pod with failed condition 11/06/22 03:13:28.789
Nov 6 03:13:28.823: INFO: Waiting up to 2m0s for pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" in namespace "var-expansion-606" to be "running"
Nov 6 03:13:28.855: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.413825ms
Nov 6 03:13:30.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061135209s
Nov 6 03:13:32.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060304859s
Nov 6 03:13:34.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.061534052s Nov 6 03:13:36.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059864129s Nov 6 03:13:38.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060919323s Nov 6 03:13:40.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.061447468s Nov 6 03:13:42.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.061063817s Nov 6 03:13:44.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.060676166s Nov 6 03:13:46.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.06039902s Nov 6 03:13:48.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.061920163s Nov 6 03:13:50.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.060015098s Nov 6 03:13:52.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.061934874s Nov 6 03:13:54.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.061627507s Nov 6 03:13:56.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.060789097s Nov 6 03:13:58.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.060824793s Nov 6 03:14:00.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.060725353s Nov 6 03:14:02.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.060743632s Nov 6 03:14:04.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.061678697s Nov 6 03:14:06.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.060272094s Nov 6 03:14:08.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.060231817s Nov 6 03:14:10.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.061762497s Nov 6 03:14:12.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.061426345s Nov 6 03:14:14.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.060220784s Nov 6 03:14:16.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.060609609s Nov 6 03:14:18.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.060016907s Nov 6 03:14:20.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.061294175s Nov 6 03:14:22.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.060268644s Nov 6 03:14:24.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.060405286s Nov 6 03:14:26.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.060025982s Nov 6 03:14:28.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.062341575s Nov 6 03:14:30.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.059952375s Nov 6 03:14:32.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.060727142s Nov 6 03:14:34.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.060148769s Nov 6 03:14:36.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.06193643s Nov 6 03:14:38.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.059874348s Nov 6 03:14:40.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.060643796s Nov 6 03:14:42.888: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.064378747s Nov 6 03:14:44.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.061652645s Nov 6 03:14:46.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.060469824s Nov 6 03:14:48.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.060301528s Nov 6 03:14:50.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.060292272s Nov 6 03:14:52.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.059990145s Nov 6 03:14:54.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.060274968s Nov 6 03:14:56.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.060386831s Nov 6 03:14:58.885: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.062328377s Nov 6 03:15:00.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.060573695s Nov 6 03:15:02.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.061268828s Nov 6 03:15:04.886: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.062703944s Nov 6 03:15:06.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m38.060376939s Nov 6 03:15:08.886: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.062362593s Nov 6 03:15:10.886: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.062381949s Nov 6 03:15:12.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.059947064s Nov 6 03:15:14.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.059969242s Nov 6 03:15:16.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.060194842s Nov 6 03:15:18.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.060131014s Nov 6 03:15:20.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.060589349s Nov 6 03:15:22.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.060729335s Nov 6 03:15:24.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.061167831s Nov 6 03:15:26.883: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.059993559s Nov 6 03:15:28.884: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.060872897s Nov 6 03:15:28.912: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.08920876s �[1mSTEP:�[0m updating the pod �[38;5;243m11/06/22 03:15:28.912�[0m Nov 6 03:15:29.481: INFO: Successfully updated pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" �[1mSTEP:�[0m waiting for pod running �[38;5;243m11/06/22 03:15:29.481�[0m Nov 6 03:15:29.482: INFO: Waiting up to 2m0s for pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" in namespace "var-expansion-606" to be "running" Nov 6 03:15:29.510: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.29484ms Nov 6 03:15:31.540: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057993401s Nov 6 03:15:33.539: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057142061s Nov 6 03:15:35.541: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059630317s Nov 6 03:15:37.539: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057296153s Nov 6 03:15:39.545: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063431155s Nov 6 03:15:41.540: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.058124407s
Nov 6 03:15:41.540: INFO: Pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" satisfied condition "running"
STEP: deleting the pod gracefully 11/06/22 03:15:41.54
Nov 6 03:15:41.540: INFO: Deleting pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" in namespace "var-expansion-606"
Nov 6 03:15:41.573: INFO: Wait up to 5m0s for pod "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5" to be fully deleted
[AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32
Nov 6 03:15:45.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193
STEP: Destroying namespace "var-expansion-606" for this suite. 11/06/22 03:15:45.662
------------------------------
• [SLOW TEST] [137.103 seconds]
[sig-node] Variable Expansion test/e2e/common/node/framework.go:23
should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225
------------------------------
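The two-minute "waiting for pod running" loop recorded above is a standard poll-until-Running wait. A minimal client-go sketch of that kind of wait (an illustration, not the e2e framework helper; the namespace and pod name come from the log, the 2s poll interval is an assumption):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls until the named pod reports phase Running or the
// timeout expires, mirroring the "Waiting up to 2m0s ... to be 'running'"
// messages above. Illustrative sketch only.
func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodRunning:
			return true, nil
		case corev1.PodFailed, corev1.PodSucceeded:
			return false, fmt.Errorf("pod %s/%s ended in phase %s", ns, name, pod.Status.Phase)
		default:
			return false, nil // still Pending, keep polling
		}
	})
}

// Example with names from this run:
// waitForPodRunning(ctx, clientset, "var-expansion-606", "var-expansion-460e075f-bea2-4e2d-8bac-5b7f6c60efa5", 2*time.Minute)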
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178
STEP: Creating a kubernetes client 11/06/22 03:15:45.699
Nov 6 03:15:45.700: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 03:15:45.701
STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:15:45.79
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:15:45.844
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31
[It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49
Nov 6 03:15:45.899: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1
replicas �[38;5;243m11/06/22 03:15:45.9�[0m �[1mSTEP:�[0m Creating deployment test-deployment in namespace horizontal-pod-autoscaling-9469 �[38;5;243m11/06/22 03:15:45.943�[0m I1106 03:15:45.977576 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-9469, replica count: 1 I1106 03:15:56.028543 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/06/22 03:15:56.028�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-9469 �[38;5;243m11/06/22 03:15:56.074�[0m I1106 03:15:56.109148 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-9469, replica count: 1 I1106 03:16:06.159818 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 03:16:11.160: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 6 03:16:11.189: INFO: RC test-deployment: consume 250 millicores in total Nov 6 03:16:11.189: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 6 03:16:11.189: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:16:11.189: INFO: RC test-deployment: consume 0 MB in total Nov 6 03:16:11.189: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:16:11.189: INFO: RC test-deployment: consume custom metric 0 in total Nov 6 03:16:11.189: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 6 03:16:11.189: INFO: RC test-deployment: disabling mem consumption Nov 6 03:16:11.249: INFO: waiting for 3 replicas (current: 1) Nov 6 03:16:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:16:41.241: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:16:41.241: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:16:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:14.277: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:17:14.277: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:17:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:44.313: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:17:44.313: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:17:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:14.353: INFO: RC 
test-deployment: sending request to consume 250 millicores Nov 6 03:18:14.353: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:18:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:44.391: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:18:44.392: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:18:51.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:11.281: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:14.434: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:19:14.434: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:19:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:44.472: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:19:44.472: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:19:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:11.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:14.509: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:20:14.509: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:20:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:44.545: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:20:44.545: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:20:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:11.280: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:14.585: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:21:14.585: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:21:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:44.624: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:21:44.624: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:21:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 
03:22:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:22:14.662: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:22:14.662: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:22:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:22:44.701: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:22:44.702: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:22:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:14.737: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:23:14.737: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:23:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:44.771: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:23:44.772: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:23:51.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:24:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:24:14.809: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:24:14.809: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:24:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:24:44.846: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:24:44.846: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:24:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:11.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:14.882: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:25:14.882: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:25:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:44.920: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:25:44.920: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:25:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:14.959: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:26:14.959: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:26:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:44.994: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:26:44.994: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:26:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:11.280: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:15.031: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:27:15.031: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:27:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:45.071: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:27:45.071: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:27:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:28:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:28:15.109: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:28:15.110: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:28:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:28:45.145: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:28:45.145: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:28:51.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:11.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:15.183: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:29:15.184: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:29:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:45.221: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:29:45.222: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:29:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:15.258: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:30:15.259: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:30:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:45.298: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:30:45.298: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:30:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.307: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.307: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 03:31:11.307: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002dfde68, {0x74a0e0e?, 0xc002bfeea0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdc20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x7475836, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88
STEP: Removing consuming RC test-deployment 11/06/22 03:31:11.34
Nov 6 03:31:11.340: INFO: RC test-deployment: stopping metric consumer
Nov 6 03:31:11.340: INFO: RC test-deployment: stopping mem consumer
Nov 6 03:31:11.340: INFO: RC test-deployment: stopping CPU consumer
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-9469, will wait for the garbage collector to delete the pods 11/06/22 03:31:21.341
Nov 6 03:31:21.456: INFO: Deleting Deployment.apps test-deployment took: 34.881164ms
Nov 6 03:31:21.556: INFO: Terminating Deployment.apps test-deployment pods took: 100.911173ms
STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-9469, will wait for the garbage collector to delete the pods 11/06/22 03:31:24.124
Nov 6 03:31:24.237: INFO: Deleting ReplicationController test-deployment-ctrl took: 32.835632ms
Nov 6 03:31:24.337: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.860037ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32
Nov 6 03:31:25.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/06/22 03:31:25.819
STEP: Collecting events from namespace "horizontal-pod-autoscaling-9469". 11/06/22 03:31:25.819
STEP: Found 21 events.
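The replica count this spec timed out waiting for is governed by the standard HPA rule desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal sketch of that rule; the utilization and target numbers below are hypothetical, since the per-pod request and target for this run are not shown in this excerpt:

package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the standard HPA scaling rule:
//   desired = ceil(current * currentUtilization / targetUtilization)
// Illustration of the rule only, not the autoscaler's implementation.
func desiredReplicas(current int32, currentUtilization, targetUtilization float64) int32 {
	return int32(math.Ceil(float64(current) * currentUtilization / targetUtilization))
}

func main() {
	// Hypothetical: 2 replicas averaging 75% of requested CPU against a 50% target
	// would yield ceil(2 * 75 / 50) = 3, the replica count the spec was waiting for.
	fmt.Println(desiredReplicas(2, 75, 50)) // 3
}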
�[38;5;243m11/06/22 03:31:25.848�[0m Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:45 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:46 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-9z7mn Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:46 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-669bb6996d-9z7mn to capz-conf-6qqvv Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:48 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:48 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:50 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Started: Started container test-deployment Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:56 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-hxwsw Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:56 +0000 UTC - event for test-deployment-ctrl-hxwsw: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-ctrl-hxwsw to capz-conf-ppc2q Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:58 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Created: Created container test-deployment-ctrl Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:58 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:59 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Started: Started container test-deployment-ctrl Nov 6 03:31:25.848: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 6 03:31:25.848: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-2npvg Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-669bb6996d-2npvg to capz-conf-ppc2q Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:28 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:28 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:29 +0000 UTC - event for 
test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:21 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:21 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:24 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment-ctrl Nov 6 03:31:25.877: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 03:31:25.877: INFO: Nov 6 03:31:25.908: INFO: Logging node info for node capz-conf-6qqvv Nov 6 03:31:25.936: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 18048 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 03:30:59 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 
registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:31:25.937: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 03:31:25.965: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 03:31:26.015: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:31:26.015: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:31:26.015: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:31:26.015: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 03:31:26.015: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:31:26.015: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:31:26.015: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:31:26.178: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 03:31:26.178: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:31:26.207: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 17717 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 03:27:13 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:31:26.207: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:31:26.236: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:31:26.285: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 03:31:26.285: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container etcd ready: true, restart count 0 Nov 6 03:31:26.285: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 03:31:26.285: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 6 03:31:26.285: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:31:26.285: INFO: Container calico-node ready: true, restart count 0 Nov 6 03:31:26.285: INFO: calico-kube-controllers-56c5ff4bf8-c9gck started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 6 03:31:26.285: INFO: metrics-server-954b56d74-tp2lc started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container metrics-server ready: true, restart count 0 Nov 6 03:31:26.285: INFO: coredns-64475449fc-jxwjm started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container coredns ready: true, restart count 0 Nov 6 03:31:26.285: INFO: kube-controller-manager-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 6 03:31:26.285: INFO: kube-proxy-gv5gt started at 2022-11-06 01:02:55 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:31:26.285: INFO: coredns-64475449fc-9kgrz started at 2022-11-06 01:03:32 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.285: INFO: Container coredns ready: true, restart count 0 Nov 6 03:31:26.434: INFO: Latency metrics for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:31:26.434: INFO: Logging node info for node capz-conf-ppc2q Nov 6 03:31:26.462: INFO: Node Info: &Node{ObjectMeta:{capz-conf-ppc2q 0e9bff17-74db-40c4-85fd-565404c5c796 18047 0 2022-11-06 01:05:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-ppc2q kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-swkgv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.41.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:f9:7f:62 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 03:30:58 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-ppc2q,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 
DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:58 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:58 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:58 +0000 UTC,LastTransitionTime:2022-11-06 01:05:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:30:58 +0000 UTC,LastTransitionTime:2022-11-06 01:05:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-ppc2q,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-ppc2q,SystemUUID:D6A1F803-1C65-4D68-BCD7-387A75C6EDBD,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess 
docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:31:26.463: INFO: Logging kubelet events for node capz-conf-ppc2q Nov 6 03:31:26.491: INFO: Logging pods the kubelet thinks is on node capz-conf-ppc2q Nov 6 03:31:26.535: INFO: containerd-logger-s25tr started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.535: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:31:26.535: INFO: kube-proxy-windows-vmt8g started at 2022-11-06 01:05:08 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.535: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:31:26.535: INFO: calico-node-windows-hsdvh started at 2022-11-06 01:05:08 +0000 UTC (1+2 container statuses recorded) Nov 6 03:31:26.535: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:31:26.535: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:31:26.535: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:31:26.535: INFO: csi-proxy-vqp4q started at 2022-11-06 01:05:39 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.535: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:31:26.676: INFO: Latency metrics for node capz-conf-ppc2q [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-9469" for this suite. 11/06/22 03:31:26.676
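For anyone replaying this failure locally, the "Logging node info" dump above can be reproduced outside the e2e framework with a short client-go program. This is only a sketch under assumptions: it reuses the kubeconfig path the suite logs (/tmp/kubeconfig), prints just the fields the dump relies on (OS, kubelet version, Ready condition), and the file name nodes_snapshot.go is made up.

```go
// nodes_snapshot.go: illustrative stand-in for the per-node summary the
// e2e dump prints above. Assumes a reachable cluster via /tmp/kubeconfig.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Surface the same high-level facts the dump records per node:
		// operating system, kubelet version, and the Ready condition.
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		fmt.Printf("%s\tos=%s\tkubelet=%s\tReady=%s\n",
			n.Name, n.Status.NodeInfo.OperatingSystem,
			n.Status.NodeInfo.KubeletVersion, ready)
	}
}
```

In the dump above all three nodes (both Windows workers and the control plane) report Ready=True, which is exactly the signal a sketch like this would surface.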
------------------------------ • [FAILED] [941.009 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/06/22 03:15:45.699 Nov 6 03:15:45.700: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/06/22 03:15:45.701 STEP: Waiting for a default service account to be provisioned in namespace 11/06/22 03:15:45.79 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/06/22 03:15:45.844 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 Nov 6 03:15:45.899: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/06/22 03:15:45.9 STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-9469 11/06/22 03:15:45.943 I1106 03:15:45.977576 14 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-9469, replica count: 1 I1106 03:15:56.028543 14 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/06/22 03:15:56.028 STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-9469 11/06/22 03:15:56.074 I1106 03:15:56.109148 14 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-9469, replica count: 1 I1106 03:16:06.159818 14 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 6 03:16:11.160: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 6 03:16:11.189: INFO: RC test-deployment: consume 250 millicores in total Nov 6 03:16:11.189: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 6 03:16:11.189: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:16:11.189: INFO: RC test-deployment: consume 0 MB in total Nov 6 03:16:11.189: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:16:11.189: INFO: RC test-deployment: consume custom metric 0 in total Nov 6 03:16:11.189:
INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 6 03:16:11.189: INFO: RC test-deployment: disabling mem consumption Nov 6 03:16:11.249: INFO: waiting for 3 replicas (current: 1) Nov 6 03:16:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:16:41.241: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:16:41.241: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:16:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:14.277: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:17:14.277: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:17:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:17:44.313: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:17:44.313: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:17:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:14.353: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:18:14.353: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:18:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:18:44.391: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:18:44.392: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:18:51.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:11.281: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:14.434: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:19:14.434: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:19:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:19:44.472: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:19:44.472: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:19:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:11.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:14.509: INFO: RC test-deployment: sending request to consume 
250 millicores Nov 6 03:20:14.509: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:20:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:20:44.545: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:20:44.545: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:20:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:11.280: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:14.585: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:21:14.585: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:21:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:21:44.624: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:21:44.624: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:21:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:22:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:22:14.662: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:22:14.662: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:22:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:22:44.701: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:22:44.702: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:22:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:14.737: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:23:14.737: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:23:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:23:44.771: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:23:44.772: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:23:51.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:24:11.278: INFO: waiting for 3 replicas 
(current: 2) Nov 6 03:24:14.809: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:24:14.809: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:24:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:24:44.846: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:24:44.846: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:24:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:11.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:14.882: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:25:14.882: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:25:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:25:44.920: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:25:44.920: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:25:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:14.959: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:26:14.959: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:26:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:26:44.994: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:26:44.994: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:26:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:11.280: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:15.031: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:27:15.031: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:27:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:27:45.071: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:27:45.071: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:27:51.278: INFO: 
waiting for 3 replicas (current: 2) Nov 6 03:28:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:28:15.109: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:28:15.110: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:28:31.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:28:45.145: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:28:45.145: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:28:51.277: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:11.279: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:15.183: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:29:15.184: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:29:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:29:45.221: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:29:45.222: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:29:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:15.258: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:30:15.259: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:30:31.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:30:45.298: INFO: RC test-deployment: sending request to consume 250 millicores Nov 6 03:30:45.298: INFO: ConsumeCPU URL: {https capz-conf-gdu8bn-1d1ca3bf.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9469/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 6 03:30:51.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.278: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.307: INFO: waiting for 3 replicas (current: 2) Nov 6 03:31:11.307: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205cd0>: { s: "timed out waiting for the condition", } Nov 6 03:31:11.307: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002dfde68, {0x74a0e0e?, 0xc002bfeea0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000fcdc20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x7475836, 0x3}, 
...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88 STEP: Removing consuming RC test-deployment 11/06/22 03:31:11.34 Nov 6 03:31:11.340: INFO: RC test-deployment: stopping metric consumer Nov 6 03:31:11.340: INFO: RC test-deployment: stopping mem consumer Nov 6 03:31:11.340: INFO: RC test-deployment: stopping CPU consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-9469, will wait for the garbage collector to delete the pods 11/06/22 03:31:21.341 Nov 6 03:31:21.456: INFO: Deleting Deployment.apps test-deployment took: 34.881164ms Nov 6 03:31:21.556: INFO: Terminating Deployment.apps test-deployment pods took: 100.911173ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-9469, will wait for the garbage collector to delete the pods 11/06/22 03:31:24.124 Nov 6 03:31:24.237: INFO: Deleting ReplicationController test-deployment-ctrl took: 32.835632ms Nov 6 03:31:24.337: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 100.860037ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 6 03:31:25.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/06/22 03:31:25.819 STEP: Collecting events from namespace "horizontal-pod-autoscaling-9469". 11/06/22 03:31:25.819 STEP: Found 21 events. 11/06/22 03:31:25.848
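The failure itself is the 15m0s replica wait in HPAScaleTest.run: the HorizontalPodAutoscaler rescaled test-deployment from 1 to 2 replicas (see the SuccessfulRescale event below) but never reached 3. The sketch below is not the e2e framework's own wait helper, just an illustrative client-go/apimachinery version of the poll that produced the repeated "waiting for 3 replicas (current: 2)" lines; the 20-second interval mirrors the cadence of those lines, and the helper name and readiness criterion are assumptions.

```go
// replica_wait.go: illustrative stand-in for the wait that timed out above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyReplicas polls the deployment until it has at least `want`
// ready replicas, or the timeout elapses. The framework's exact check may
// differ (e.g. it may count total rather than ready replicas).
func waitForReadyReplicas(cs kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas >= want, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The failed spec waited 15 minutes for the HPA to take test-deployment
	// in horizontal-pod-autoscaling-9469 from 1 replica to 3; it stalled at 2.
	if err := waitForReadyReplicas(cs, "horizontal-pod-autoscaling-9469", "test-deployment", 3, 15*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```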
Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:45 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 1 Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:46 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-9z7mn Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:46 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-669bb6996d-9z7mn to capz-conf-6qqvv Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:48 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:48 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Created: Created container test-deployment Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:50 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Started: Started container test-deployment Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:56 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-hxwsw Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:56 +0000 UTC - event for test-deployment-ctrl-hxwsw: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-ctrl-hxwsw to capz-conf-ppc2q Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:58 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Created: Created container test-deployment-ctrl Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:58 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 6 03:31:25.848: INFO: At 2022-11-06 03:15:59 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Started: Started container test-deployment-ctrl Nov 6 03:31:25.848: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 6 03:31:25.848: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-669bb6996d to 2 from 1 Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment-669bb6996d: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-669bb6996d-2npvg Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:26 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9469/test-deployment-669bb6996d-2npvg to capz-conf-ppc2q Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:28 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Created: Created container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:28 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 6 03:31:25.849: INFO: At 2022-11-06 03:16:29 +0000 UTC - event for
test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Started: Started container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:21 +0000 UTC - event for test-deployment-669bb6996d-2npvg: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:21 +0000 UTC - event for test-deployment-669bb6996d-9z7mn: {kubelet capz-conf-6qqvv} Killing: Stopping container test-deployment Nov 6 03:31:25.849: INFO: At 2022-11-06 03:31:24 +0000 UTC - event for test-deployment-ctrl-hxwsw: {kubelet capz-conf-ppc2q} Killing: Stopping container test-deployment-ctrl Nov 6 03:31:25.877: INFO: POD NODE PHASE GRACE CONDITIONS Nov 6 03:31:25.877: INFO: Nov 6 03:31:25.908: INFO: Logging node info for node capz-conf-6qqvv Nov 6 03:31:25.936: INFO: Node Info: &Node{ObjectMeta:{capz-conf-6qqvv 21ca7817-6572-4b5d-812e-ce0eb0d5f68a 18048 0 2022-11-06 01:05:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-6qqvv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-md-win-996555db8-qszhv cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-md-win-996555db8 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.43.193 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:58:86:f4 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet.exe Update v1 2022-11-06 01:05:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-06 01:05:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-06 01:05:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-06 01:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-06 02:09:10 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-11-06 03:30:59 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-6qqvv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakePTSRes: {{10 0} {<nil>} 10 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:30:59 +0000 UTC,LastTransitionTime:2022-11-06 01:05:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-6qqvv,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-6qqvv,SystemUUID:4FBA08C6-3CF7-43A9-B47F-5DD6399E03F4,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:269514097,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 
registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:206103324,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:97bc10aa5000a0ee1c842ac32771fe7a45a3a5ca507711bdf57ae2eb5f293e2b docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258343,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:ea8b55bde9aed6a649582a6e21029577430661c743d94b3a5e93d57e648874a2 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005624,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 03:31:25.937: INFO: Logging kubelet events for node capz-conf-6qqvv Nov 6 03:31:25.965: INFO: Logging pods the kubelet thinks is on node capz-conf-6qqvv Nov 6 03:31:26.015: INFO: kube-proxy-windows-mg9dn started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container kube-proxy ready: true, restart count 0 Nov 6 03:31:26.015: INFO: containerd-logger-4c4v9 started at 2022-11-06 01:05:13 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container containerd-logger ready: true, restart count 0 Nov 6 03:31:26.015: INFO: csi-proxy-d7klv started at 2022-11-06 01:05:43 +0000 UTC (0+1 container statuses recorded) Nov 6 03:31:26.015: INFO: Container csi-proxy ready: true, restart count 0 Nov 6 03:31:26.015: INFO: calico-node-windows-wq7jf started at 2022-11-06 01:05:13 +0000 UTC (1+2 container statuses recorded) Nov 6 03:31:26.015: INFO: Init container install-cni ready: true, restart count 0 Nov 6 03:31:26.015: INFO: Container calico-node-felix ready: true, restart count 1 Nov 6 03:31:26.015: INFO: Container calico-node-startup ready: true, restart count 0 Nov 6 03:31:26.178: INFO: Latency metrics for node capz-conf-6qqvv Nov 6 03:31:26.178: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t Nov 6 03:31:26.207: INFO: Node Info: 
Nov 6 03:31:26.178: INFO: Logging node info for node capz-conf-gdu8bn-control-plane-tjg6t
Nov 6 03:31:26.207: INFO: Node Info:
&Node{ObjectMeta:{capz-conf-gdu8bn-control-plane-tjg6t 1b062db8-a1d5-4d72-b97f-3f553f9a80bc 17717 0 2022-11-06 01:02:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-gdu8bn-control-plane-tjg6t kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-1] map[cluster.x-k8s.io/cluster-name:capz-conf-gdu8bn cluster.x-k8s.io/cluster-namespace:capz-conf-gdu8bn cluster.x-k8s.io/machine:capz-conf-gdu8bn-control-plane-r9dv5 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-gdu8bn-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.255.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-06 01:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-06 01:02:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-06 01:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-06 01:03:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-06 01:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-06 03:27:13 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-gdu8bn/providers/Microsoft.Compute/virtualMachines/capz-conf-gdu8bn-control-plane-tjg6t,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-06 01:03:36 +0000 UTC,LastTransitionTime:2022-11-06 01:03:36 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:02:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-06 03:27:13 +0000 UTC,LastTransitionTime:2022-11-06 01:03:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-gdu8bn-control-plane-tjg6t,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78d5859e57514e33b16c735e58b1e9ed,SystemUUID:000037f3-aea5-d84d-b6e2-269548336f74,BootID:2d661860-3c1f-4907-aa6c-ac6c2ce1dffc,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,KubeProxyVersion:v1.26.0-alpha.3.239+1f9e20eb8617e3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-apiserver:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:132977107,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-controller-manager:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:120025913,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-proxy:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:66202310,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler-amd64:v1.26.0-alpha.3.239_1f9e20eb8617e3 registry.k8s.io/kube-scheduler:v1.26.0-alpha.3.239_1f9e20eb8617e3],SizeBytes:53027640,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 6 03:31:26.207: INFO: Logging kubelet events for node capz-conf-gdu8bn-control-plane-tjg6t
Nov 6 03:31:26.236: INFO: Logging pods the kubelet thinks is on node capz-conf-gdu8bn-control-plane-tjg6t
Nov 6 03:31:26.285: INFO: kube-scheduler-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:54 +0000 UTC (0+1 container statuses recorded)
Nov 6 03:31:26.285: INFO: Container kube-scheduler ready: true, restart count 0
Nov 6 03:31:26.285: INFO: etcd-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:53 +0000 UTC (0+1 container statuses recorded)
Nov 6 03:31:26.285: INFO: Container etcd ready: true, restart count 0
Nov 6 03:31:26.285: INFO: kube-apiserver-capz-conf-gdu8bn-control-plane-tjg6t started at 2022-11-06 01:02:52 +0000 UTC (0+1 container statuses recorded)
Nov 6 03:31:26.285: INFO: Container kube-apiserver ready: true, restart count 0
Nov 6 03:31:26.285: INFO: calico-node-4tbpv started at 2022-11-06 01:03:13 +0000 UTC (2+1 container sta
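After each node dump, the framework walks the pods it believes are running on that node ("Logging pods the kubelet thinks is on node ...") and prints per-container readiness and restart counts; the log wording suggests it asks the kubelet itself for that list. As a hedged approximation only, not the framework's implementation, the sketch below gets a similar view from the API server with a spec.nodeName field selector; the node name and kubeconfig path are taken from this log, everything else is assumed.

// podsbynode.go - illustrative sketch; the API server's view of which pods are
// bound to a node can lag the kubelet's own view during disruption, which is
// presumably why the framework queries the kubelet directly.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    nodeName := "capz-conf-gdu8bn-control-plane-tjg6t" // node name from the dump above
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // List pods the API server has bound to the node across all namespaces.
    pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        // Echo the log's shape: "<pod> started at <time> (<init>+<app> container statuses recorded)".
        fmt.Printf("%s/%s started at %v (%d+%d container statuses recorded)\n",
            p.Namespace, p.Name, p.Status.StartTime,
            len(p.Status.InitContainerStatuses), len(p.Status.ContainerStatuses))
        for _, c := range p.Status.ContainerStatuses {
            fmt.Printf("  Container %s ready: %v, restart count %d\n", c.Name, c.Ready, c.RestartCount)
        }
    }
}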